Hacker News | xmcqdpt2's comments

On the other hand, I've found that agentic tools are basically useless if they have to ask for every single thing. I think it makes the most sense to just sandbox the agentic environment completely (including disallowing remote access from within build tools, pulling dependencies from a controlled repository only). If the agent needs to look up docs or code, it will have to do so from the code and docs that are in the project.

The entire value proposition of agentic AI is doing multiple steps, some of which involve tool use, between user interactions. If there’s a user interaction at every turn, you are essentially not doing agentic AI anymore.

If the entire value proposition doesn’t work without critical security implications, maybe it’s a bad plan.

They are worse than people: LLMs combine toddler-level critical thinking with intern-level technical skills, and they read much, much faster than any person can.

Right. But my point is, they belong to the bucket labeled "people", not the one labeled "software", for purpose of system design.

That analysis only makes sense if companies value AI tools as much as equivalent human productivity. Hypothetically, say you have a company with 100 junior developers. An AI service comes around that doubles the productivity of your junior developers, so you can keep only 50 of them. Would the company pay $5M a year, forever, for that service?

In my experience, the answer is a resounding no. They'll nickel-and-dime some kind of per-seat licensing on a monthly basis that costs less than $100 or whatever. So for every $100 in salary you can automate away, you might get $2 in subscription payments if you are lucky, at current rates.


As a result of Executive Order 14203, titled “Imposing Sanctions on the International Criminal Court”

https://en.wikipedia.org/wiki/Executive_Order_14203


That’s sanctions evasion, and those companies will be very wary of providing services to any close family of a sanctioned person. My guess is that these people’s SOs, children and their SOs are similarly banned, and that siblings, parents and “close associates” have to provide far more documentation when opening bank accounts than you or I.

Pretty sure the news coverage would have mentioned that if that was the case.

I found the original source (the letter written by the judge) here

https://www.union-syndicale-magistrats.org/sanctions-america...

In it he specifically mentions that family cannot buy things for you, because doing so is a crime if they are in the US or are US nationals, and that your direct family is banned from entering the US (p. 4). He does not specifically state whether his own family members are sanctioned, but he says that is a risk when he talks in general terms, on page 7, about the impact of the sanctions regime on other judges. Perhaps he is simply not married himself.

Either way, for some reason the news coverage didn’t include these parts of the letter, maybe they didn’t read the whole thing.


I worked on anti money laundering for a Canadian bank in Canada. Our scenarios in 2020 were much stricter than stopping illegal arms trading. We were on the lookout for Iranian-Canadian dual citizens sending Canadian dollar remittances to their Iranian families, which would have invalidated the bank’s status as a money service business in the US (which all Canadian banks require due to our integrated economy!) That is, any transaction in any financial institution in any currency (including eg life insurance, mortgages, paypal, etc) is covered by American sanctions regulations if that financial institution does any business in US dollars.

The legal justification literally is “we put this person on the sanction list because national security.” The sanction process is basically its own legal justification.

> The only real cost is in memory footprint

There are also load and store barriers, which add work when accessing objects on the heap. In many cases, adding work on the parallel path is a good trade if it lets you avoid single-threaded sections, but not in all cases. Single-threaded programs with a lot of reads can be significantly impacted by barriers:

https://rodrigo-bruno.github.io/mentoring/77998-Carlos-Gonca...

The Parallel GC is still useful sometimes!
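A minimal sketch of that read-heavy, single-threaded pattern (class and method names here are mine, not from the thread): every `n.next` hop is a reference load from the heap, which is exactly where a concurrent collector's load barrier inserts extra instructions. Comparing runs under `-XX:+UseParallelGC` and `-XX:+UseZGC` should make the difference visible.

```java
// Single-threaded, read-heavy pointer chasing: each n.next is a reference
// load from the heap, so a concurrent GC's load barrier runs on every hop.
public class BarrierDemo {
    static final class Node {
        final long payload;
        Node next;
        Node(long payload) { this.payload = payload; }
    }

    // Build a singly linked list of `size` nodes with payloads 0..size-1.
    static Node build(int size) {
        Node head = new Node(0);
        Node cur = head;
        for (int i = 1; i < size; i++) {
            cur.next = new Node(i);
            cur = cur.next;
        }
        return head;
    }

    // Sum payloads by chasing next pointers; this is the barrier-heavy loop.
    static long walk(Node head) {
        long sum = 0;
        for (Node n = head; n != null; n = n.next) sum += n.payload;
        return sum;
    }

    public static void main(String[] args) {
        Node head = build(1_000_000);
        long start = System.nanoTime();
        long sum = 0;
        // Compare wall time between:
        //   java -XX:+UseParallelGC BarrierDemo
        //   java -XX:+UseZGC       BarrierDemo
        for (int pass = 0; pass < 50; pass++) sum += walk(head);
        System.out.println("sum=" + sum
            + " elapsedMs=" + (System.nanoTime() - start) / 1_000_000);
    }
}
```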


Sure, but other forms of memory management are costly, too. Even if you allocate everything from the OS upfront and then pool stuff, you still need to spend some computational work on the pool [1]. Working with bounded memory necessarily requires spending at least some CPU on memory management. It's not that the alternative to barriers is zero CPU spent on memory management.

> The Parallel GC is still useful sometimes!

Certainly for batch-processing programs.

BTW, the paper you linked is already at least somewhat out of date, as it's from 2021. The implementation of the GCs in the JDK changes very quickly. The newest GC in the JDK (and one that may be appropriate for a very large portion of programs) didn't even exist back then, and even G1 has changed a lot since. (Many performance evaluations of HotSpot implementation details may be out of date after two years.)

[1]: The cheapest scheme is arenas, which are similar in some ways to moving-tracing collectors, especially in how they can convert RAM to CPU, but they can have other kinds of costs.
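As a concrete illustration of the arena trade-off, Java's own FFM API (final since JDK 22) exposes arena allocation directly; this sketch assumes a recent JDK. Per-allocation work is minimal and `close()` frees every segment in one step, but nothing can be released individually before the whole arena goes away, which is the "other kind of cost".

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

// Arena allocation: each allocate() is cheap, and closing the arena
// releases every segment at once -- but no segment can be freed
// individually before the whole arena is closed.
public class ArenaDemo {
    // Store the first n squares off-heap in an arena, then sum them.
    static long sumOfSquares(int n) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment buf = arena.allocate(8L * n); // n longs
            for (int i = 0; i < n; i++) {
                buf.setAtIndex(ValueLayout.JAVA_LONG, i, (long) i * i);
            }
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += buf.getAtIndex(ValueLayout.JAVA_LONG, i);
            }
            return sum;
        } // arena closed here: all segments freed in one step
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(100)); // prints 328350
    }
}
```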


The difference from manual memory management or a parallel GC is that concurrent GCs create a performance penalty on every read and write (modulo what the JIT can elide). That penalty is absolutely measurable even with the most recent GCs. If you look at the assembly produced for the same code running under ZGC and Parallel, you’ll see that read instructions translate to far more CPU instructions in the former. Just this week at work we were looking at a bug (in our code), on Java 25, that was exposed by the new G1 late barrier expansion.

Different applications will see different overall performance changes (positive or negative) with different GCs. I agree with you that most applications (especially realistic multi-threaded ones, representative of the kind of work people do on the JVM) benefit from the amazing GC technology that the JVM brings. It is absolutely not the case, however, that the only negative impact is on memory footprint.


> The difference from manual memory management or a parallel GC is that concurrent GCs create a performance penalty on every read and write

Not on every read and write, but it could be on every load and store of a reference (i.e. reading a reference from the heap to a register or writing a reference from a register to the heap). But what difference does it make where exactly the cost is? What matters is how much CPU is spent on memory management (directly or indirectly) in total and how much latency memory management can add. You are right that the low-latency collectors do use up more CPU overall than a parallel STW collector, but so does manual memory management (unless you use arenas well).


Yes, the lack of observability is really the disturbing bit here. When you have panics in a bunch of your core infrastructure, you would expect a big red banner on the dashboard that people look at when they first start troubleshooting an incident.

This is also a pretty good example of why having stack traces by default is great. That error could have been immediately understood from just a stack trace and a basic exception message.


Does this avoid the dining philosopher deadlock?


Yes, 'synchronize' uses a try_lock/backoff algorithm, same as std::scoped_lock.

edit: it could theoretically livelock, but I believe most if not all STM implementations also do not guarantee forward progress.
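The pattern can be sketched in Java with `ReentrantLock.tryLock()` (a simplified variant in the spirit of std::scoped_lock; the names are illustrative, not from any STM library): take locks opportunistically, and on the first failure release everything acquired so far and retry. A thread never blocks while holding a partial set of locks, so the circular wait behind the dining-philosophers deadlock cannot form, though livelock remains theoretically possible.

```java
import java.util.concurrent.locks.ReentrantLock;

// try_lock/backoff: try to take every lock; on the first failure, release
// whatever was acquired and report failure so the caller can retry.
// No thread ever blocks while holding a partial set of locks, so the
// circular-wait condition for deadlock can never arise.
public class BackoffLocking {
    static boolean tryAcquireAll(ReentrantLock... locks) {
        int acquired = 0;
        for (ReentrantLock lock : locks) {
            if (!lock.tryLock()) break;
            acquired++;
        }
        if (acquired == locks.length) return true;
        while (acquired-- > 0) locks[acquired].unlock(); // back off
        return false;
    }

    static void acquireAll(ReentrantLock... locks) {
        // Spin until every lock is held; real implementations add a
        // randomized backoff delay here to shrink the livelock window.
        while (!tryAcquireAll(locks)) Thread.onSpinWait();
    }
}
```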


Purely optimistic STM implementations that abort transactions early and don't permit other transactions to read uncommitted data can guarantee forward progress, and I believe that both Haskell's STM and Fraser and Harris's STM do, though I could easily be mistaken about that.


Probably you are right. I vaguely remembered the "Why Transactional Memory Should Not Be Obstruction-Free" paper, but I might have misunderstood or forgotten what it means (the implementation can be non-obstruction-free, but that doesn't mean it can live-lock).


I'm reading the Kuznetsov and Ravi paper https://www.researchgate.net/publication/272194871_Why_Trans... now; I assume that's the one you mean? Its definition of "obstruction-freedom" is that every transaction "not encountering steps of concurrent transactions" must commit. This seems to be neither necessary nor sufficient for avoiding livelock, but certainly very helpful. Their weaker "progressiveness" property seems almost as good.

They claim that their STM "LP" is not obstruction-free but is wait-free, which is a very strong claim! WP explains, "A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. ‘Non-blocking’ was used as a synonym for ‘lock-free’ in the literature until the introduction of obstruction-freedom in 2003." Kuznetsov and Ravi say of LP, "every transactional operation completes in a wait-free manner."

Its normative force seems to be founded on claims about performance, but it would be very surprising if the performance cost of guaranteed forward progress or obstruction-freedom were too high for me to want to pay it, since what I'm most interested in is latency and fault-tolerance, not parallel speedups.


I need to re-read the paper, but:

>"LP" is not obstruction-free but is wait-free

As far as I know, wait-free is a superset of lock-free and lock-free is a superset of obstruction-free. How can LP be wait-free but not obstruction free?

In any case a wait-free algorithm can't live-lock by definition (progress and fairness are guaranteed for all threads), but the catch is that while the STM runtime itself might have this property, it doesn't necessarily transfer to an algorithm implemented on top of the runtime (which makes sense: you should be able to implement a lock with an STM).

So, yes, the paper is interesting, but probably not relevant for this discussion and I shouldn't have brought it up.

Now the question again remains: do concrete STM implementations actually provide the guarantee you mentioned earlier, i.e. does a transaction aborting guarantee that another succeeds? I would think not, as it seems very costly to implement: both transactions might end up aborting when they detect a conflict.

Maybe what concrete runtimes actually guarantee is an upper bound on spurious aborts and restarts, since in the worst case you can fall back on a single global lock for serialization once that bound is reached.


> As far as I know, wait-free is a superset of lock-free and lock-free is a superset of obstruction-free. How can LP be wait-free but not obstruction free?

Ugh! I meant it the other way around, of course: wait-free is a subset of lock-free, which is a subset of obstruction-free.


I'm still struggling with it.

You avoid livelock, as I understand the term in an STM, if the only thing that can prevent a transaction from committing when it tries to commit is some other transaction having committed. That way, forward progress is guaranteed; as long as some transaction commits, you're not livelocked, are you?

I'm not familiar with "obstruction-free"ness; should I be?

