
Google's Project Zero.


To be fair, you need _significantly_ less[0] uranium than the amount of coal you'd have to mine. For every million mountains destroyed by coal mining, you'd only have to destroy a couple for uranium. Not to mention I don't think uranium is usually mined from mountains but rather from open pits, which makes it far better than the alternatives.

Personally, however, I still support solar. Especially solar roof tiles: they look good, pay for themselves after a few years and can be placed on every house.

[0] https://xkcd.com/1162/


Not every house has solar resource. If your house is in the middle of the woods it's not going to work unless you clear-cut the surrounding woods and then you're not in the middle of the woods anymore. Same if you are in the shadow of a couple of mid-rises. etc. etc. Even if there is solar resource it may not replace more than a fraction of usage. Roof-top solar is great where the resource exists, but it's a partial solution at best. There's still a need for the utility grid and something needs to fuel that. It could be utility-scale solar, but it's not onsite rooftop solar.


You realize that most houses are not in the middle of the woods? If I look around here in Berlin, almost all houses would qualify for rooftop solar. There is of course still the storage problem.


You do know that a significant amount of coal is mined without the destruction of mountains, right?


Reminds me of this chart: https://www.youtube.com/watch?v=PI99A08Y83E&t=13m45s (if you want to skip to the punch line, it's just after 15:00), but this one is still in log scale.


Why on Earth would they feel it necessary to do this? The United States Patent Office doesn't have a complex system of sub-domains or even an EV certificate; if money were the object, they could just go with Let's Encrypt (not to mention the current certificate remains valid until 2018 anyway).

The amount of computing power it takes to serve traffic over SSL/TLS is minimal, especially if you use newer schemes like ECDSA, and it should not be a concern for an organization like the Patent Office.
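For a rough sense of scale, here's a quick (unscientific) Node.js sketch comparing ECDSA and RSA signing cost, the asymmetric part of a TLS handshake; the curve, key size and iteration count are arbitrary choices for illustration:

    // Quick, unscientific sketch: compare ECDSA vs RSA signing cost in Node.js.
    // Curve, key size and iteration count are arbitrary choices for illustration.
    const crypto = require('crypto');

    const { privateKey: ecKey } = crypto.generateKeyPairSync('ec', { namedCurve: 'prime256v1' });
    const { privateKey: rsaKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

    function timeSigning(label, key) {
      const payload = Buffer.from('hello world');
      const start = process.hrtime.bigint();
      for (let i = 0; i < 1000; i++) {
        crypto.sign('sha256', payload, key);
      }
      const perSigMs = Number(process.hrtime.bigint() - start) / 1e6 / 1000;
      console.log(`${label}: ${perSigMs.toFixed(3)} ms per signature`);
    }

    timeSigning('ECDSA P-256', ecKey);
    timeSigning('RSA-2048', rsaKey);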


You think they have a free hand, but they don't. People who work at federal agencies are hemmed in by a thicket of rules that prevent them from just entering into an agreement with a private vendor who has not been through a qualification process. Likewise, technicians and administrators are enjoined from spending money on their own initiative; e.g. you can't just see the problem, sign up for a renewable SSL certificate, and then shoot the bill over to accounts payable.

I don't want to speculate about what's going on at the managerial/administrative level, but I notice the current administration is committed to the goal of slashing most government spending by some huge amount while simultaneously cutting taxes. It may be that the head of the USPTO got a phone call telling them not to spend a single damn penny. Now, iirc the USPTO is actually self-financing on patent application fees, but I don't think they're so independent that they can just ignore directives from higher up in the executive branch.


Totally this. I have childhood friends who now work in government IT and I just can't wrap my head around how these people get anything done in those kinds of environments.


sounds like the US government is ripe for disruption...

/s


It is one of the rare organizations where the people responsible for its budget have long track records of trying to have it disrupted (or just eliminated).

Congress has often tried to undermine the ability of the EPA, IRS, NIH, NOAA... to do their jobs, which then makes it seem like they are ripe for disruption.


The Republicans are essentially government saboteurs, which is fascinating if you can look at it from a distance. Most countries' major conservative parties don't do this. Quite frankly, I'd say that having a relatively small government is what saves the US from disaster, as countries with that level of government dysfunction are usually pretty poor.


> relatively small government

Are we talking about the same government??


Maybe they fired or lost the only person left who understood how it works, it broke, and whoever they yanked across the hall to get it going again could only do it this way. I think there's still a govt-wide hiring freeze.


Hiring freeze was lifted on April 12, 2017.


Plenty of time to hire someone and sort out all the issues!


At which point they started hiring only loyal idiots.


Maybe someone sued them over some highly-innovative encryption patent they've previously granted? :) /s


Well if you put on your tinfoil hat - maybe someone wants to track who's viewing which patents, which they can't do when it's encrypted. You're right, it doesn't make any sense to do this, so there must be an ulterior motive.


If that were the case and USPTO were in on the trick, why the need to drop HTTPS?

They'd have that data already, so could just share it directly.


This will allow ISPs to track who is viewing particular patents and when. That would be very lucrative data to sell in some circumstances. I doubt the USPTO would distribute a list of IP addresses that accessed a patent without some kind of due process.


I think the value of that data might already be gutted, though, as big companies use proprietary databases which have enhanced data on the patents. Also Google Patents...


Yeah, I don't think it's actually their reason for the change. It's just one hypothetical consequence that the decision makers probably failed to consider. Still, the decision makers should be investigated for conflicts of interest because they've made a really fishy-smelling decision.


Didn't your country just drop the privacy protection rules that hindered ISPs from selling any American's browser history?


If they shared the data, they could get caught doing so. By simply removing HTTPS someone could intercept the requests on their own without any wrongdoing on the part of USPTO (aside from dropping HTTPS).


Plausible deniability? Shifting blame?

I'm just playing Devil's advocate here.


> Well if you put on your tinfoil hat - maybe someone wants to track who's viewing which patents, which they can't do when it's encrypted.

No, a third-party attacker can just look at size/timing of packets to figure out which page is being viewed, especially given it's among a limited and static corpus.
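As a sketch of what that looks like in practice (purely illustrative: the URL is a placeholder, and real traffic analysis would work from TLS record sizes and timing rather than exact body lengths):

    // Build a "response size -> document" fingerprint table for a public,
    // mostly static corpus, then match sizes observed on an encrypted link.
    // The URL below is a placeholder, not the real USPTO endpoint.
    const patentIds = ['9000001', '9000002', '9000003'];

    async function buildFingerprintTable() {
      const table = new Map();
      for (const id of patentIds) {
        const res = await fetch(`https://example.invalid/patents/${id}`);
        const body = await res.arrayBuffer();
        table.set(body.byteLength, id);
      }
      return table;
    }

    buildFingerprintTable().then((table) => {
      const observedSize = 48213; // size sniffed from the encrypted connection
      console.log('Best guess:', table.get(observedSize) ?? 'unknown');
    });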


Tracking users over HTTPS is a solved problem, so I doubt that'd be it. Something about "never attribute to malice that which can be adequately explained by incompetence"?


What do you mean by that? Is knowing the URL of an HTTPS request a solved problem?


What I mean is that if a third party wanted tracking info all they'd have to do is pay for a tracking script to be injected. Let's say the patent office is okay with this. Why wouldn't they just include a <script src="evil.js"> instead of going through the trouble of disabling HTTPS just so a third party can get their eyes on the juicy information? Just as easily, patent office could sell access logs to interested parties. In that not-very-roundabout way, knowing the URL and who wants it is very much a solved problem.

If third party tracking (for malicious intent or otherwise) is the main reason behind the change, why not do it how everyone else does?

It stands to reason they just don't want to deal with SSL termination anymore, for whatever reason. Though, at least in my eyes, that's a solved problem too.


I would assume that they do it because HTTPS does complicate the pipeline on several different levels. If you want to tcpdump the HTTPS traffic, for example, you need to do SSL/TLS termination at the load balancer to get something readable. Most web servers don't make it easy to inspect traffic on the other side of the decryption; I remember having to enable some very verbose debug logs in nginx to accomplish this. Third-party tooling is necessary to take a dump of encrypted traffic and decrypt it for analysis.
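For concreteness, that termination hop can be as small as a reverse proxy in front of the app; here's a minimal Node.js sketch (placeholder certificate paths and backend address, nothing specific to the USPTO's setup) where the backend leg is plain HTTP and can be captured with tcpdump:

    // Minimal TLS-terminating reverse proxy sketch: decrypt at the edge, then
    // forward plain HTTP to a local backend so that leg can be tcpdump'd.
    // Certificate paths and the backend address are placeholders.
    const https = require('https');
    const http = require('http');
    const fs = require('fs');

    const tlsOptions = {
      key: fs.readFileSync('/etc/ssl/private/example.key'),  // placeholder
      cert: fs.readFileSync('/etc/ssl/certs/example.crt'),   // placeholder
    };

    https.createServer(tlsOptions, (clientReq, clientRes) => {
      const upstream = http.request({
        host: '127.0.0.1',
        port: 8080,
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers,
      }, (upstreamRes) => {
        clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(clientRes);
      });
      clientReq.pipe(upstream);
    }).listen(443);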

It's also possible that their configuration was causing them performance problems and decreasing overhead by killing HTTPS for "unnecessary" endpoints was seen as a potential solution. Requesting a public record about a patent is not something that, at first glance, seems like it should need to be transferred over a secure protocol.

Of course, none of these are really good reasons to disable HTTPS, but they're some potential explanations.

-----

Separately, I think some people who remember HTTPS being used to secure "true secret" pages kind of resent the "HTTPS must be used anywhere and everywhere" trend that has taken hold. It's not that there aren't good reasons to do that, but it's also silly to pretend there aren't side effects of doing it.

From some perspectives, the need to encrypt all communication can be seen as an external concern for something like a VPN tunnel to handle. End-to-end crypto is good because it, theoretically, precludes interception by anyone who can get in between the server and the VPN, but it needs to be more transparent before everyone is willing to consider that a worthwhile/important tradeoff.

One side effect of HTTPS everywhere is that the site can no longer really designate some portion of traffic as "secret". If every admin in your org needs to be able to decrypt all HTTPS traffic to debug issues, you're giving some access away. Maybe some of them would've been able to get to that data anyway, but probably many of them would not.

Again, this is not to say that HTTPS shouldn't be used, but just some musings on why someone would not necessarily be enthusiastic about it. Working to integrate HTTPS more transparently for admins, and working toward the ability to mark specific information for extra "app-layer secrecy" instead of just relying on transport-layer secrecy, seem like they'd be good steps.


You could easily terminate SSL at the LB or even just at a proxy in front of the app. Sniffing the line after that is as trivial as turning on a mirror port on the switch. In this day and age SSL is trivial, and there is honestly no good reason to disable it. In fact, protecting users' privacy is a good reason they should switch to SSL only.

I know you were only trying to come up with some kind of reason, but there just isn't a valid one.


The grandparent comment touches upon the reality of working in most large organizations. There tend to be silo'd teams, which in more concrete terms translates into the "application/website" team and the "networking" team. Sometimes, there's a "system administration" team sandwiched in between.

HTTPS everywhere reduces the number of teams that used to, in the old "HTTP-only" world, serendipitously pitch in to help troubleshoot tickets. Now, instead of anybody within the network being able to sniff HTTP packets, troubleshooting is limited to one or two groups.

In your example, terminating SSL at the LB, or adding a proxy in front of the app, would either be an annoyance or major project, respectively. Small firms wouldn't think twice and would jump into action; but large organizations have too much internal inertia.

I see your point too, but the USPTO probably: a) is underfunded; and b) exhibits all the average capabilities and organizational "effectiveness" of a large bureaucracy.

Perhaps a better question is whether the USPTO would object to having their site content mirrored by a 3rd party better capable of offering features that users are complaining about (HTTPS & better search). Google has their own version[1].

[1] https://patents.google.com/


Having done independent full-stack web design, I'd expect an individual could, from scratch, set up a working system with load balancing and failover in perhaps 2-3 weeks... an experienced team should surely be able to do that in their sleep inside a week?

What would others' expectations be for such a service? The USPTO does have a web team, yes? That site has been the same for over a decade AFAIR; what have they been doing?


The issue here is that even if you could do it, the question when working in a large bureaucracy is whether you're given the latitude to do so when teams are silo'd and responsibilities are split along very sharp fault lines.

So, even though you could setup a working system top to bottom from scratch, depending on which team you're attached to you'd have to design, explain/argue and work with other teams and their overbearing workloads and attendant baggage.

An experienced team with full ownership of load balancers, firewalls, hosts, security and applications could conceivably do this in their sleep inside of a week -- but few large organizations split their responsibilities in this manner.

I don't know anything about USPTO and am only making sweeping generalizations, but my experience with government and large network operators seems to generalize well (so far).

This is why I hope that other 3rd parties can export the USPTO data and make it available somehow. Not sure if this is a potential target for the Archive Team[1] or archive.org, although their missions aren't cleanly aligned to this particular need.

[1] http://archiveteam.org/index.php?title=Main_Page


It's Friday, roughly 4 PM, which means they'll likely be closing down shortly. Seems odd for them to do it as such a last-minute thing.


You can source Li-ion/Li-Poly cells with 250 Wh/kg fairly easily (as a company; it's almost impossible as an individual), and I believe I read somewhere that Tesla hit about 260-270 Wh/kg with their latest Powerwalls. Even so, I still doubt they'll reach more than 200 km of range with current technology.

Their stated mission, however, is to release to the public in 2025, which makes the goal much more likely to be hit given improvements in technology between now and then.
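Rough back-of-envelope check on that range figure; every number below (battery mass, pack derating, cruise power, cruise speed) is a guess of mine, not anything Lilium has published:

    // Back-of-envelope range estimate; all inputs are assumptions, not Lilium's figures.
    const batteryMassKg = 300;          // assumed battery mass
    const specificEnergyWhPerKg = 250;  // cell-level figure quoted above
    const packDerating = 0.8;           // pack overhead + usable depth of discharge
    const cruisePowerKw = 60;           // assumed average cruise power draw
    const cruiseSpeedKmh = 250;         // assumed cruise speed

    const usableEnergyKwh = (batteryMassKg * specificEnergyWhPerKg * packDerating) / 1000;
    const enduranceHours = usableEnergyKwh / cruisePowerKw;
    const rangeKm = enduranceHours * cruiseSpeedKmh;

    console.log(`${usableEnergyKwh} kWh usable, ~${Math.round(rangeKm)} km range`);
    // With these guesses: 60 kWh usable, 1 hour endurance, ~250 km. The answer
    // is extremely sensitive to the assumed cruise power, which is the point.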


It's the first point they mention on the landing page:

- 300 km range - Travel from London to Paris in one hour.


Yes, they could have written anything there. What matters is what the weight of the combination is to achieve that range (even in theory). It takes a lot of energy to fly a plane with aerodynamics such as this one, far more than your typical glider, and batteries are very, very heavy. Also, there wasn't anybody on board.

If this whole contraption weighed more than 100 KG for the demo I'd already be very impressed, even more so if 80% of that wasn't battery weight and if it could stay aloft for more than the one minute demo.

This is not so simple.


That's what they claim they will do, not what they've demonstrated. This seems to be a modernized version of Moller International's "just around the corner" hype of the last, what, 30 or so years, only Lilium's hype seems more oriented toward potential investors/acquirers rather than selling pre-orders to aspiring individual owners of flying cars that'll never be commercially viable.


Aka investor bait. It's been awfully quiet around Moller since 2010 or so, maybe that's why this company is able to do what it does. Normally you'd be sent home to do your homework if you came up with a battery powered VTOL with short stubby wings.

The first and only interesting problem these guys should solve is how they are going to power it. Everything else can wait until then.


Well, there's an interesting, tangentially related bit in their March 2017 newsletter [0]

---

Moller International has received a number of emails from newsletter subscribers who have expressed concern that MI’s lead in VTOL capable flying cars is being upstaged by companies like Airbus, Ehang, Embraer, Google, and Joby. Nothing that is presently contemplated as a battery powered flying car is a threat to the technology that has been developed by MI.

---

Incidentally, is it wrong that I kind of want a Moller v. Lilium PR battle to be a thing?

[0] http://us2.campaign-archive1.com/?u=84c20a8ab4539585a29aaaa5...


That will only happen if they see each other as going for the same sources of funds. The one is in .de, the other in the United States, so I doubt that would happen.

Even so, it would be hilarious. Maybe we can set them up? Mail them both saying you're ready to pre-order but are also looking at the other?

Incredible that Moller is still going at it and that he still manages to get more money. Elizabeth Holmes could learn a thing or two from Moller.


Imperial (a UK university) are currently making large leaps forward in terms of modelling a room or environment with just a single camera and little computing power. Their basic methodology was:

- Pick "unique" points currently in view

- Track how they move as the camera moves

- Combine this data with the accelerometer to get an accurate movement reading of the phone.

- You can get the depth of any point by comparing two images and knowing the change in user position.

Simple algorithm, but their results were astonishingly good. Snap don't need to model the entire room, they just need to work out where these points are to keep the image appearing still.
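As a toy illustration of the last two steps (made-up numbers; a real system also has to handle rotation, scale drift and noise):

    // Toy illustration: once you know how far the camera moved sideways between
    // two frames (from the IMU), a tracked point's depth follows from how far it
    // appears to shift between the images.
    function depthFromTwoViews(focalLengthPx, baselineMeters, xInImage1Px, xInImage2Px) {
      const disparityPx = xInImage1Px - xInImage2Px; // apparent shift of the point
      if (disparityPx === 0) return Infinity;        // no parallax -> effectively at infinity
      return (focalLengthPx * baselineMeters) / disparityPx;
    }

    // Camera moved 5 cm to the right; the tracked corner shifted by 12 px.
    console.log(depthFromTwoViews(1000, 0.05, 512, 500)); // ~4.17 m away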


I swear Electron is incredibly fast in every instance I've used it. Unless you're doing heavy computation with complex algorithms, it's really nice. I don't believe VS Code has to do any computationally expensive work, and it enables many more people to work on a project (JavaScript being a truly universal language).

I'm currently working on a mail application, and my benchmark test case is an email account with 20,000 emails to be displayed. Noting that we currently have no lazy loading, and that all of these are loaded from an on-disk database and put inside their own Shadow DOM, it truly did surprise me that an entire page load took just over 400 ms on my development laptop (~3 years old with an i5, nothing special).

Even if you need to do complex algorithms, just write that particular part in C and use it in the rest of your JS code like normal.
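Not the real client, but a synthetic sketch of the same benchmark idea (browser-side, throwaway data): create 20,000 items, each with its own shadow root, and time the initial render:

    // Synthetic sketch of the benchmark: 20,000 items, one shadow root each.
    const container = document.createElement('div');
    document.body.appendChild(container);

    const start = performance.now();
    for (let i = 0; i < 20000; i++) {
      const item = document.createElement('div');
      const shadow = item.attachShadow({ mode: 'open' });
      shadow.innerHTML = `<style>p { margin: 0; }</style><p>Subject of email #${i}</p>`;
      container.appendChild(item);
    }
    void container.offsetHeight; // force layout so timing covers more than DOM construction
    console.log(`Rendered 20,000 items in ${Math.round(performance.now() - start)} ms`);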


> I swear Electron is incredibly fast in every instance I've used it

That's really surprising to me, I've had completely the opposite experience. Just resizing a window shows how slow it is to repaint and scrolling is often pretty poor too, and this is on a moderately powerful desktop machine. I really hate to think how it'd cope on an embedded device with an ARM processor. There's a marked difference to me between Atom and e.g. Sublime Text.

> I'm currently working on a Mail application and my benchmark test case is an email account with 20,000 emails to be displayed.

Is this benchmark open source? I'd be really interested to try it compared to something similar built in Qt or Gtk+.

> Even if you need to do complex algorithms, just write that particular part in C and use it in the rest of your JS code like normal.

This might speed up complex background processing, but the fact remains that the UI is built on the DOM with interactions handled by JS. It's just not ideal for a desktop application.

With all that said, I totally understand why a lot of applications are built with these sort of technologies. It's much easier to build a cross-platform application than anything else. The price you pay is that the UX is poor.


Don't let Atom be your electron example, it is actually slow. Try VS Code instead.

- signed, former Atom user.


Faster how?

In my experience, VSC's cold boot startup time is just as bad as Atom, and it's not immune to interface jank - for example, when unfolding the "tree view" on one of my projects, I can actually see the folders being painted top to bottom.

But I don't use VSC day-in-day-out (since it's missing multiple project root folder support), so perhaps I'm missing something.


> Is this benchmark open source? I'd be really interested to try it compared to something similar build in Qt or Gtk+.

Afraid it's not; currently the only working copy is closed-source whilst I work on rewriting it all into an open-source repository. Having said that, I have no doubt that Qt or Gtk+ would be significantly faster than rendering what is essentially a local website.

> I really hate to think how it'd cope on an embedded device with an ARM processor. There's a marked difference to me between Atom and e.g. Sublime Text.

Reminds me of a joke told by a friend of mine:

Friend: "The thing I like about Atom is that it encourages you to write small files."

Me: "How does it do that?"

Friend: "Well, it crashes if you write more than a couple hundred lines."

Personally, I too use Sublime Text. Not entirely sure how Atom manages to be so slow, but it takes ten times longer to start up and five times longer to search through my files. Definitely caused me to switch away from it.

> but the fact remains that the UI is built on the DOM with interactions handled by JS

Excellent point, but I think if you're doing complex interactions that require a large amount of computing time then you shouldn't be using something like Electron. For something like a mail application, not much changes. We actually never remove mail items, we simply hide them from view, so we have one large render time at start-up and then almost none after that (showing and hiding an element is incredibly efficient). For something like Atom, you have a significant amount more happening and you want just about everything to be instantaneous. Plus the fact that it's expected to be able to deal with large files consistently without any hiccups.
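A tiny sketch of that "hide, don't remove" idea (not our actual code; `mailList` and the `data-subject` attribute are made up for illustration):

    // Filtering the already-rendered items only toggles the `hidden` attribute,
    // so no DOM nodes are created or destroyed on each filter change.
    function applyFilter(container, predicate) {
      for (const item of container.children) {
        // The node, its shadow root and its listeners all stay in place.
        item.hidden = !predicate(item.dataset.subject ?? '');
      }
    }

    // e.g. applyFilter(mailList, subject => subject.includes('invoice'));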


Of course Electron is fast; it builds on years of work making V8 fast.

My main issue is battery life. I just need to open one electron app (Discord, Slack, Spotify even though that's CEF, but close enough) to have powertop drop from 10 hours expected battery life to 3-4.


That means it isn't fast. Your CPU is fast and electron is making it consume an insane amount of resources to do trivial things. If it were truly fast, it would not affect your battery life.


Hey, have you used Powertop's --auto-tune flag? Is there any way you know of to apply its tunings selectively? I don't like that my disks spin up and park too frequently, because I have some spinning disks too.


I'm sorry, what's CEF? I haven't come across that acronym before.


Chromium Embedded Framework


Startup time, memory use, and CPU use are typically the issues.

Of course, Atom is a specific case that lagged behind VS Code for a while. I have no idea whether that's still true.


Considering how we're doing now? Yeah, I'm pretty cool with how they handled things.


This site is actually really awesome, and has worked for every website I've tried! My only slight issue with it, however, is that it took me a few minutes to work out what "HTML codes" were, and even then it was only from watching the video. Have you considered renaming it to something like "HTML Source Code"? It also seems to struggle on web pages where it can't find tables, such as the following page I made which contains no information:

https://hastebin.com/eguluvoquq.html


I appreciate your thorough testing and feedback. Your suggestion is very helpful to me.

Actually, any (partial or full) HTML source code works: <div></div>, <p></p>, <span></span>, <html></html>, etc. Following your advice, I changed the placeholder description to "any HTML Source Code".

Secondly, my server returns a 500 error only when there is nothing to extract, as with your example. I will fix it soon. Thank you.
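For reference, the empty-input case could be handled roughly like this (browser-style DOM APIs; a sketch, not the service's actual code): return an empty list instead of failing with a 500.

    // Sketch: parse the submitted HTML, collect any <table> elements, and
    // return an empty result instead of a 500 when there is nothing to extract.
    function extractTables(htmlSource) {
      const doc = new DOMParser().parseFromString(htmlSource, 'text/html');
      return [...doc.querySelectorAll('table')].map(table =>
        [...table.rows].map(row => [...row.cells].map(cell => cell.textContent.trim()))
      );
    }

    console.log(extractTables('<div><p>no tables here</p></div>')); // -> []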

