
I only have one rule: I contact an open source developer only if I believe resolving the problem would make the product better, unless I am willing to offer them reasonable money to do it.


The preference of the developer doesn't feed into your decision to contact them? What if the repo readme says "this is provided as-is, I don't offer support"?


A counterpoint:

You may view this feature as making the product better, but it might not fit the vision of the owner/developer and to them it will only generate a maintenance burden and bloat.


You can't know that before you contact them, though.


Doesn't (almost) every bug report/feature request meet this criterion?


This post is about when it is or isn't appropriate to directly contact the principal developer instead of using the established Slack or GitHub flows.


Polish guy here. And Lem fan, too. Ask anything you want.


Yes and yes.

As far as we know anti-atoms (antihydrogen in this case) are as stable as normal atoms.

To the point where it creates interesting questions -- if anti-atoms behave exactly like normal atoms, why do we have an abundance of normal matter but not antimatter?


> I don’t think the utility of a tank is completely obviated.

The whole point of a tank, as the word suggests, is to be able to survive enemy fire.

With the proliferation of easy-to-carry weapons that can pierce any tank, it is largely relegated to being a heavy, costly and fragile mobile cannon that needs a lot of support to stay alive. There are much better devices that can fill that role.

You should no longer assume that you can ambush anything with your tanks -- with a live overhead feed it is easy to spot tanks encroaching on your position and place anti-tank weapons in the right spot.

And then you have drones that let you basically point and shoot at any tank.

I am pretty sure this is the last war where we will see large numbers of tanks involved. Every country that is watching this is click-spamming to buy as many drones as possible.


Tanks have always co-evolved against their threats. The reason top armor is weak is that the primary threat when the bulk of these designs originated (60s/70s) was from other tanks or direct-fire guns. Precision indirect fire munitions weren't yet a major threat.

So, when you're rearchitecting a tank today, you're going to protect it against the now-dominant threats.

At its base, a tank is a propulsion system, a gun, and a set of survivability options.

The first two are always going to be relatively expensive, in quantity. So the last gets defined and scaled to meet the expected threat.


I guess you can imagine an evolution similar to the one that, a century ago, drove the invention of aircraft carriers, when it became clear that battleships were becoming too large, too heavy and too expensive to meet their primary goal of dominating the sea around them.

I.e. mobile platforms that are essentially defenceless on their own but carry a large armament of drones and other electronic devices into enemy territory, meant to quickly take over the surrounding space (surface and overhead) and make quick work of neutralising various threats like enemy personnel, drones, etc.

But I am not sure about that. Planes require a landing strip to take off from and large hangars to store them, and that drove the basic form of the aircraft carrier.

There is no such limitation for electronic equipment and small drone carriers travelling on land. And I think that, rather than presenting a single high-value target to the enemy, it makes sense to have a lot of specialised units functioning as one through information systems, which cannot be disabled with a single successful strike.


https://www.gd.com/Articles/2021/10/06/general-dynamics-at-a...

See: GD's TRX concept w/ loitering munitions and a tethered surveillance UAV

> [vs distributed systems]

See: USAF / USN Next Generation Air Dominance programs, or the European Future Combat Air System, which are declared as "systems of systems" to get the desired capabilities. We'll see how far they ultimately lean into distributed though.


> This vulnerability affects parsing maliciously crafted certificates, so it will mostly affect clients.

Actually, it is the opposite.

You seem to be unaware of the fact that servers do receive certificates from clients, which are then parsed.

Which is already mentioned in the advisory document:

  "Thus vulnerable situations include:

   - TLS clients consuming server certificates
   - TLS servers consuming client certificates <---- here
   - Hosting providers taking certificates or private keys from customers
   - Certificate authorities parsing certification requests from subscribers
   - Anything else which parses ASN.1 elliptic curve parameters"


That's not how the TLS handshake works. The TLS server must be configured to request a certificate from the client in order for the client to know that it needs to send a client certificate to the server, and that server-side configuration is disabled for ~99%+ of endpoints.

TLS server implementations should be aborting the TLS connection for violating the TLS Handshake state machine if a client attempts to send a client certificate when it wasn't requested.

So while this bug affects both clients and servers, 100% of clients are parsing the server's TLS cert during the TLS handshake, but less than ~1% of servers are parsing a client's certificate during a handshake.
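
For illustration, a minimal sketch of that opt-in using Python's ssl module (the file names here are placeholders): by default a server context never requests a client certificate, and it only parses one once verify_mode is switched on.

  import ssl

  # Server-side context; by default verify_mode is CERT_NONE,
  # so no CertificateRequest is sent and no client cert is parsed.
  ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
  ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

  # Opting in to mutual TLS: now the server requests a certificate
  # and parses whatever (possibly malicious) bytes the client sends.
  ctx.verify_mode = ssl.CERT_REQUIRED
  ctx.load_verify_locations(cafile="client-ca.pem")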


There is very little reason to DOS a client and a lot of reasons to attack servers.

There is a huge number of public-facing services that implement mutual auth, and all of those are potentially vulnerable to DOS. While clients can just decide not to connect to a web service that causes their browser to malfunction (and why did you connect there in the first place?), services are usually not at liberty to ignore a client at this stage.

So yes, those servers that do request client certificates are targets, and my point still stands that servers are much more affected than clients.

What would be an affected client? You keep connecting to this infected website that causes your browser to die? Somebody embedded some tracking on their page that now points to an infected website? Everybody will just move on and it is hard to say you are very much affected by this problem.

Whereas if you are a service and you are affected you absolutely need to implement a fix.


correction: literally 99.99999% of endpoints. they use password^W SMS authentication instead. no seriously, how is the one good thing about X.509 (authentication via public key) the only unused part? (to be fair, if anyone used it, we'd have a whole new 10 episodes of vuln disclosures).


> correction: literally 99.99999% of endpoints.

You made up a number with no grounding in reality because of your bias from being part of the "general public".

For corporate services it is actually quite common to use client certificates and mutual auth. Also popular with VPNs.

You might not be aware of this because corporations do not want to deal with people who do not know, or cannot be forced to learn, how to generate a signing request.

This is different when you control both the service and the users of the service and you have something valuable to protect.

As an example, I worked with credit card terminals and these used mutual auth with properly managed client certificates.

You wouldn't call a DOS on all terminals and ATMs "insignificant".


No, I was purely focused on the public web. As for your corporate services, those are all insecure as hell despite whatever tech they use. Anything that's remotely hidden from the public in any way has historically been found to have non-stop, gaping, and obvious security holes, even after being corrected 1-5 times. This is a result of the way businesses are run as miniature reactionary states ("just ban people in the firewall brother. call the police").


I looked at the data for some places I have known very well for over 3 decades (like the area where I grew up, which I still visit regularly to see my parents), and it looks noisy and mostly incorrect, nothing like the real tree cover change.


I think the main problem here is not necessarily the actual extent of the ice but rather the rapid change and how we, people, are dependent on a particular climate in particular parts of our planet.

We are all dependent on a very fragile balance of various mechanisms that we do not fully understand.

For example, the European climate depends very much on the mass of warm water transported by the Gulf Stream. Europe would basically be northern Canada if not for all that warm water and the precipitation that comes with it. But we also know that this current itself depends on the water cooling up north and sinking to complete the cycle. If the water can't cool, the cycle will be broken and Europe's climate may change dramatically at an astonishing rate.

I am not worried about plant and animal life -- these will migrate or adapt. Nature has always found a way in the past.

What I am worried about is the human toll: masses of people affected by rapid climate change who are unable to fend for themselves.


You can't "balance the carbon cycle" because the problem is not the carbon cycle.

The carbon cycle is fine.

The problem is that there is a huge amount of additional carbon that the cycle cannot accommodate. If I remember correctly, every decade we are adding more carbon than the entire weight of the biosphere.

Also, plants have lower albedo than most types of ground. Remember, they are built to capture light. This means they cause Earth to capture more energy.

Plants don't do much to atmospheric carbon if they can't be sequestered. Otherwise they just burn or rot -- returning the carbon to the atmosphere.

The conditions for the plant carbon to be sequestered are not favourable nowadays.

I love plants and I want more plants and trees and all that. Just don't be naive thinking this is going to solve the problem.

Because that naivety more often than not leads lay people to think that something is being done when trees are planted, which is bad for those of us who would like something real to be done.


There are a lot of occasions for observations if you are into it.

This photo is not a happy accident, though. It took careful preparation.

Let's see... it looks like at a distance of 400 km we can see features roughly 1 m in size (or even better). This points to a resolution of 0.01 arcsecond, which is phenomenal for an amateur setup.


That's not quite right. 1 meter at 400 km distance is ~0.51 arcseconds, which is on the edge of doable with good seeing. 0.01 arcseconds would never be possible within the atmosphere.
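
A quick back-of-the-envelope check (small-angle approximation) agrees:

  import math

  # Angular size of a 1 m feature seen from 400 km away.
  theta_rad = 1.0 / 400_000
  theta_arcsec = math.degrees(theta_rad) * 3600
  print(theta_arcsec)  # ~0.52 arcseconds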


The most depressing statistic here (for me anyway) is that to directly image the nearest exoplanet at 1 km per pixel we're going to need a telescope with ~0.000000005 arcseconds of resolution.

Someone check my math but that would be like imaging the ISS at the nanometer scale from the surface of the earth.
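
A rough check, assuming the nearest exoplanet is Proxima Centauri b at ~4.24 light years and the ISS orbits at ~400 km:

  import math

  LY_KM = 9.461e12                        # kilometres per light year
  d_km = 4.24 * LY_KM                     # assumed distance to Proxima Centauri b

  # Angle subtended by a 1 km feature at that distance.
  theta_rad = 1.0 / d_km
  print(math.degrees(theta_rad) * 3600)   # ~5e-9 arcseconds

  # Equivalent feature size on the ISS at ~400 km altitude.
  print(theta_rad * 400e3)                # ~1e-8 m, i.e. about 10 nanometres

So the ISS comparison comes out at roughly 10 nm, i.e. the math holds to within an order of magnitude.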


I heard about this fascinating idea of using the gravitational lensing of our own sun to image the surface of exoplanets: https://www.youtube.com/watch?v=NQFqDKRAROI

It wouldn't look like any telescope you might have ever seen. Once we have a candidate exoplanet we want to take a picture of, we would launch a flock of free-flying, solar-sail propelled satellites in such an orbit that they get yeeted away from the sun on a trajectory opposite the target exoplanet. They would travel to the "focal plane" of the sun's gravitational lens, where the exoplanet's light is smeared into a ring around the sun, which they collaboratively capture. Probably one such pass wouldn't be enough, so we would need to send such flocks multiple times, like waves following one another.

What I love about the plan is that it is both super scifi, yet we already have all the components to make it happen if we want to.


Now compute what kind of virtual aperture size it takes to get this resolution without being diffraction limited.

Edit: I did it, about 25,000 km (for light with a wavelength of 500 nm), or roughly twice the diameter of the earth. That actually suggests it could be doable with a constellation of telescopes in high orbit.
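
For reference, that number falls out of the Rayleigh criterion (aperture D ≈ 1.22 λ / θ):

  import math

  theta_arcsec = 5e-9                        # target resolution from above
  theta_rad = math.radians(theta_arcsec / 3600)
  wavelength = 500e-9                        # metres, green light

  # Minimum aperture for diffraction-limited resolution at this angle.
  aperture_m = 1.22 * wavelength / theta_rad
  print(aperture_m / 1000)                   # ~25,000 km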


Or a gigantic obstruction: https://www.nasa.gov/content/the-aragoscope-ultra-high-resol...

Sadly the occulter has to be smooth at sub-wavelength scales, or solar system bodies could be used.


I couldn't find a quick answer to this question. But what size aragoscope do you need to achieve the equivalent of an x meter aperture?

My gut says probably the same size, but the claims suggest the aragoscope can actually be smaller. My gut can also imagine it depends on the distance between the aragoscope and the telescope.


> the claims suggest the aragoscope can actually be smaller.

This may be because the shape of the PSF is different from the normal Airy disk one.

Here is a random Google result showing the spot of Arago, https://www.lighttrans.com/use-cases/application/observation... -- which looks to me like it would have poor contrast but good resolution. Though I'm out of my depth so it could be nonsense. :P

Edit: Ah, yeah the graph at figure 9 in the report linked on the linked page shows something like that.


The synopsis on the site suggests the same size as the disk. But I guess it doesn't say it scales the same as with mirror size.

> can be used to achieve the diffraction limit based on the size of the low cost disk, rather than the high cost telescope mirror


Believe it or not, that project is already underway and will use our Sun as a gravitational lens.

https://en.wikipedia.org/wiki/Solar_gravitational_lens

I predict that in a couple of decades we will learn to build swarms of drone craft that we will send to the right location, and they will be able to image nearby planets (one per swarm...) at a resolution of ~10-40 km per pixel, if not better.


“Underway” is a little strong. Putting something at 500 AU is still not really feasible within a reasonable timeframe.

But I think there are exciting things to come up in the next decades for sure.


1km would be fantastic but I think 1000km would already give us a ton of information.


A few months ago the ISS passed over my city during sunset, while my family and I were hanging out outside. It was an amazing experience. I saw it first: it was reflecting the sun, so it looked like a shooting star, but it was too slow for a shooting star and too fast for a plane, and too bright to be an ordinary satellite. So I googled the ISS position and it matched! We were able to see it for a while.


No, the title is not clickbait. This is the best photo of the ISS transiting the Moon I have seen, by a large margin.


Indeed. There are also pretty stunning transits of the Sun (with Canadarm and Crew Dragon resolved) on the photographer's website [1]. But Moon transits are likely much more challenging since the Moon is much fainter as a background than the Sun.

1. http://www.astrophoto.fr/iss_transits_june2020.html


I would guess the Moon is actually much brighter once you filter the Sun to safe levels.

