
Absolutely. That is either "Teileinkünfteverfahren" (60% of 1 Million EUR are taxed, 40% are tax-exempt) or "Abgeltungssteuer" (Government takes a 25% cut). Fun fact: there is no tax on winning the lottery in Germany!


Loto is well-known (in France) as the poor's tax


Only if you lose (not win) :) (irrelevant side note: poker is treated as a skill-based game in Germany, hence you have to pay taxes on winnings)


Donated.


Thanks a lot :)


Can't agree more. Imagine you own a resource that everyone on the planet needs. Pretty much everyone on the planet is your customer. You have a monopoly. Little to no competition. You control the prices. No need to pay taxes. You are the government, the police, and in some cases even the religion.

Ellison can't beat that.


...here we go again.


> “Logic doesn't deteriorate,” said Bob England, a COBOL developer and president of England Technical Services. “Until the business rules change, that code is as good as the day it was born.”

True, but the business context always changes. I doubt most of the payroll logic from 50 years ago is still in use today. Therefore someone needs to write new code, or change the old code.

> Many COBOL applications are large; rewriting them isn’t cost-effective—or even necessary.

Yes, your IBM sales rep makes sure of it.


I've never bought this one. Surely a rewrite in Java/Kotlin or C# would save you a fortune in the long run.


The time horizon on that payback is very long and the risk of total project failure is very high.


What's the biggest bottleneck?


Figuring out the requirements is a research project unto itself. The churn of people over the years means there is no single source of truth for what the system does or why. I wrote an answer about this at https://softwareengineering.stackexchange.com/questions/4252...

You generally want to improve on what you have in some way (or else why do it?), but the line between the warts and requirements is blurry.

Incremental replacement is often a hard sell on the relationship side of things because the customer sees a reduction in functionality until everything is done. You can mitigate it and work around it, but it tends to excite waterfall urges, which drives up risk.

You also have the fact that the organization keeps changing, so your target is also changing.

There is also an opportunity cost to the whole venture. All that time you spent rebuilding what you had could have been put into new ventures or back into core business.

If you look into modernization type efforts, you see a lot of failure and a lot of loss. That isn't to say it's impossible or that it isn't ever the right answer, but it definitely isn't safe or easy.


Probably the average tenure of C/VP-level sponsors for such a project.

By the time the gains would be realized, the muck-a-muck will have moved on to a different company. They tend to think in quarters, not years.


Also, sunk cost fallacy. Any reasonably complex COBOL application has lots of lines of code, which are easily counted and in some cases may have even been originally paid for per-line on expensive contracts still on file. That gives a lot of "weight" to that sunk cost on the books, even in timelines of half-centuries. "You want to rewrite a hundred thousand lines of once very expensive code?" you can hear the ghosts of bean counters past shouting in the corridors no matter how much you explain to them that they should have better amortized and depreciated those costs decades ago.


Rewriting software is a massive slog and is prone to all kinds of subtle, possibly very expensive bugs. Especially when the original software has no tests to verify expected behavior in edge cases.


If you hired and trained competent people to do the rewrite, sure. A lot of places will just outsource to a cheap offshore team and be surprised when nothing works.


> True, but the business context always changes.

What about the regulatory context?

I'm thinking of banks, for example.


Regulatory changes in banking have kept accelerating over the last 50 years, and I see no reason this will stop. It is harder and harder for banks to keep up. Reasons for this are:

* Banks still have very old codebases and changing anything is fear-inducing for everyone (automated testing is a new thing for a lot of teams)

* Banks control very little of their core software. They have a lot of vendors for the same thing. You will see multiple vendors delivering basically the same thing in different parts of the bank, and at the same time multiple versions of those vendor applications in different parts of the bank. Integrating this is a nightmare (it can take more than a year to move a tag in an XML file).

* IT is still mostly used for marketing purposes in banks; it's not a way of doing things. The design/integration is mostly discussed/designed by functional people. They are great, but it does happen that they don't see the full implications of their choices until it is too late. Also, because this is a design 'from above', changing anything takes a lot longer. For any change you have to convince a lot more people before you get to the actual person who will have to do it.

* A lot of more complicated management reasons. Nobody wants to take a hit on a new IT project, and few people have both the IT and banking knowledge to drive such a project (why would you? the pay is average). You cannot progress with just one of them. The places where these projects happen are mostly close to pricing/trading, as those are seen as profit centers and the bank is more willing to fail a few times until it can build something decent. As you get closer to the guts of the bank, things get more and more... problematic.

Edit: I had to add this. Central banks are also bad at IT; they do not set an example worth following. I consider this the main reason why all our interactions with banks are less than ideal online.


> What about the regulatory context?

It's probably even worse


Tricky problem to implement. I tried to compute requests per second from multiple threads. The problem gets much harder once you reach a certain threshold; I think it was around 100 requests per second in my case. Beyond that, memory contention becomes a problem. Then I learned about LongAdder in Java. It was a really interesting topic.
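For what it's worth, here's a minimal sketch of that approach (class name and thread counts are made up): LongAdder keeps per-thread cells internally, so concurrent increments don't all fight over one cache line the way they do with a single AtomicLong.

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical request counter: many writer threads, one periodic reader.
public class RequestCounter {
    private final LongAdder requests = new LongAdder();

    public void recordRequest() {
        requests.increment(); // cheap on the hot path, no CAS contention
    }

    public long snapshotAndReset() {
        // Intended to be called from a single reader, e.g. a scheduled
        // task that reports requests-per-second once per second.
        return requests.sumThenReset();
    }

    public static void main(String[] args) throws InterruptedException {
        RequestCounter counter = new RequestCounter();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.recordRequest();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.println(counter.snapshotAndReset()); // 40000
    }
}
```

Note that sum() / sumThenReset() are not atomic snapshots under concurrent writes, which is fine for statistics like this but not for exact accounting.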


He had his time.


Oh wow. I played this ages ago. Congrats to the team. This is dedication.


How do Rust crates compare with something like Maven or npm? It looks like some issues, typosquatting for example, affect all of these dependency managers.


Yep, supply chain attacks are a near-universal problem with programming language package managers.

I think there's a lot of room for improvement here. Some good low-hanging fruit IMO would be to:

1. Take steps to make package source code easier to review.

1.1. When applicable, encourage verified builds to ensure package source matches the uploaded package.

1.2. Display the source code on the package manager website, and display a warning next to any links to external source repositories when it can't be verified that the package's source matches what's in that repo.

1.3. Build systems for crowdsourcing review of package source code. Even if I don't trust the package author, if someone I _do_ trust has already reviewed the code then it's probably okay to install.

2. Make package managers expose more information about who exactly you're trusting when you choose to install a particular package.

2.1. List any new authors you're adding to your dependency chain when you install a package.

2.2. Warn when package ownership changes (e.g. new version is signed by a different author than the old one).

Long-term, maybe some kind of sandbox for dependencies could make sense. Lots of dependencies don't need disk or network access. Denying them that would certainly limit the amount of damage they can do if they are compromised, provided the host language makes that level of isolation feasible.


I like all these ideas.

> Long-term, maybe some kind of sandbox for dependencies could make sense. Lots of dependencies don't need disk or network access.

Just like with Android permissions, we could audit the crate sources to list out what functions it uses (out of the standard library or wherever) and provide an indication of what this particular crate is capable of.


For what it's worth, this Principle Of Least Authority / object-capability model is being attempted in the JavaScript ecosystem with SES (Secure ECMAScript).

https://agoric.com/blog/technology/ses-securing-javascript/

https://medium.com/agoric/pola-would-have-prevented-the-even...


This is a strategy, but it typically falls apart against clever attackers who are targeting you specifically. Hackers have been performing return-to-libc attacks forever where they don't actually get to write any code at all, just sequence code that already exists in your binary.

Java also tried this in a slightly more rigorous manner with the SecurityManager and that just ended up being a botch.


Yeah that's why I said it really depends on the host language to make such sandboxing feasible. If you're using a language that lets code write arbitrary data to arbitrary memory locations, implementing a secure sandbox is going to be pretty tricky.


Analysis tools that show where large transitive dependencies could be avoided would help.

Right now there is no feedback to encourage people to not have HUGE lists of dependencies. And for trivial reasons. This compounds the problem hugely.

If you have three dependencies, verifying is feasible. If you have 3,000, it is not.


Maven Central is somewhat resilient against this. In the java world, an artifact is identified by a group-id, an artifact-id and a version, and some technical stuff. The group id is a reversed domain, like org.springframework.

If you want to upload artifacts with the group id "org.springframework", you first have to demonstrate that you own springframework.org via a challenge, usually a TXT record (other options exist for GitHub-based group ids and such).

It's not entirely bulletproof, because you could squat group-ids "org.spring" or "org.spring.framework" (if you can get that domain). However, once a developer knows the correct group id is "org.springframework", you need additional compromises to upload an artifact "backdoor" there.

Edit - and as I'm currently seeing, PGP signatures are also required by now.


It's a hell of a lot harder to squat namespaces as you need to either spoof or steal or buy one domain per namespace, which is not trivial.

Maven Central has required PGP signatures since the beginning, as far as I know! In the olden days it didn't use HTTPS, though (which has been fixed for several years now), so unless you validated the signatures and kept track of the PGP keys, you could still run into trouble.


> It's a hell of a lot harder to squat namespaces as you need to either spoof or steal or buy one domain per namespace, which is not trivial.

This introduces a different security wrinkle, as domain names need to be continually renewed. What does Maven do to prevent unauthorized transfer of namespace ownership when a domain lapses?


That seems to be a very unusual case, but because Maven uses PGP keys, the domain owner would need to ALSO transfer their own PGP keys to the new domain owner, otherwise lib consumers wouldn't automatically (at least) trust their releases under that domain name.


I haven't thought this through at all, but are you aware of any package repositories that do something like Levenshtein distance between package names, maybe combined with a heuristic on commonly mistyped characters, to disallow typosquatting?


Yes, they do that in Dart's pub [1].

They also have the concept of verified publishers[2], which is pretty neat (similar to Maven Central), and they keep track of a score for each package (e.g. https://pub.dev/packages/darq/score), including up-to-date dependencies and the result of static analysis.

Dart is doing a lot of things right.

[1] https://pub.dev/

[2] https://dart.dev/tools/pub/publishing#verified-publisher


Are there any tools that can scan my dependencies and point out names that are typos of older or more popular packages?

Something like: you said "times", did you mean the older and more popular package "time"?
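A minimal sketch of the name-distance check such a tool could run (the package names and the distance threshold of 1 are illustrative, not from any real tool):

```java
// Flag a dependency whose name is within edit distance 1 of a more
// popular package name, suggesting a possible typosquat.
public class TypoCheck {
    // Classic dynamic-programming Levenshtein edit distance.
    static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(
                        Math.min(d[i - 1][j] + 1,   // deletion
                                 d[i][j - 1] + 1),  // insertion
                        d[i - 1][j - 1] + cost);    // substitution
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        String[] popular = {"time", "serde", "rand"}; // hypothetical list
        String candidate = "times";
        for (String p : popular) {
            if (!p.equals(candidate) && distance(candidate, p) == 1) {
                System.out.println("you said \"" + candidate
                        + "\", did you mean \"" + p + "\"?");
            }
        }
    }
}
```

A real tool would presumably also weight by download counts and package age, so that a genuinely new package isn't flagged just for being near an obscure old one.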


These do all seem to be things that apply to most package managers of this kind. So it would be good if Rust could find solutions that can be applied more broadly.


npm has some guards for typosquatting. They're annoying when you run into them but I appreciate that they're there. I have no idea how effective or extensive they are, though.


Where would the world be if we had just stopped innovating after the first idea that worked?

