firebaze's comments | Hacker News

I've made more money from stocks since Trump became president than I ever thought I would. Make of that what you will.

The USP of the F-35 is that it's a relatively cheap, "stealth" 5th-gen fighter jet making use of integrated sensor input.

It can also be shut off by presidential order, and it relies on cloud-based services, including security updates for its on-board Kubernetes cluster.

From the perspective of a country like Greenland, if anything sounds like a fuck-up in the making, that's it.


> it's a relatively cheap, "stealth" 5th gen fighter jet making use of integrated sensor input

Yes, and as I pointed out in my example, it's not something existing MRCAs can provide (it's an airframe-plus-composites issue), but for most buyers (especially in Europe) it never made much sense for the threat landscape they face.

Portugal was already on the fence, as they were looking at replacing their legacy F-16 Fighting Falcon fleet [0], and an F-35 would have been unnecessary spend for that kind of use case anyhow - hence why South Korea (Boramae) and Türkiye (Kaan) decided to build their own comparable alternatives to the F-16, and others such as the UAE and KSA decided to go with the Rafale or Eurofighter.

> relies on cloud-based services, including security updates for its on-device Kubernetes cluster

You don't need to use ALIS if you are using an F-35.

For example, Israel chose to roll their own interconnect (B-Net and IAI+Thales+BEL's datalink), which they share with France and India, and are using it as a replacement in the F-35I Adir. You never saw Israel complain, despite Netanyahu's government publicly despising the Biden admin, because Israel began its own indigenization efforts after the US-Iran talks started back in Obama's first term.

The European countries that do use ALIS are those that either lack their own domestic interconnect (Netherlands) or don't want to buy from their French competitors (Germany).

[0] - https://euro-sd.com/2024/03/articles/37091/portugals-compreh...


How can someone asking this question have so many karma points on HN?


Disclosure: 23andMe has stored my genetic information since they first started offering genotyping, and I requested data deletion recently, although some of my family members haven't and likely won't. I also interviewed for and was offered a role on 23andMe's infra team (circa 2010), but declined.

I am not concerned about my genetic data being sold. I am not worried about it being public; it is, through Harvard's Personal Genome Project [1]. If you are going to harm me, you are likely going to use a method far easier than one that requires access to my genotyping data. There is also enough overlap with close genetic matches (2nd-4th cousins, with hundreds of matches) that even if my data is retained despite my deletion request, it would not change the risk assessment. It would take just a bit more legwork to tie a sequence of my DNA to me [2].

Hence my questions to better understand what OP is attempting to defend against. You can't propose mitigations or other recourse (legal and regulatory, primarily, in this case) if you don't know the risk you're attempting to manage, or the threat you're attempting to defend against.

[1] https://pgp.med.harvard.edu/

[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

(day job is in risk management)


1429 is not a lot. People ask how somebody who has as much karma as I do can post the things I post and maybe they have a point.


Mail me at (redacted) and I'll provide his contact address. You may want to give him some tips.

To be honest, I may have exaggerated the power-outage ratio, as I am not a first-hand observer; but I tend to give some credibility to his description of the living situation over there, especially as they want to move.

My point is more or less that denying such stories, even ones that are not fully true, strengthens demagogues like Trump instead of weakening them.


If you're around 75 years of age and living in quite comfortable but increasingly problematic accommodation, it's not easy to move. Right now we're looking for a place for him and his fiancée. Living and housing costs over here are quite heavy.


https://news.ycombinator.com/item?id=42986488

It has been "unflagged" now (update: flagged again). My reason for posting this is that I think quite a few people in the US will have similar stories, which will spread among the general population. And if they are denied, Trump will gain more credibility.

I know this will now be flagged into oblivion.

This is not the only reason I dislike politics-related posts here. But if some aspect comes up, I still prefer to post some substance instead of just letting it be.


This is so German


I'm using local LLMs to battle-test an app by impersonating various personae with bad intentions (mostly on the social side, but also ones trying to break the system). Off-the-shelf cloud LLMs refuse such requests (probably rightfully so); Llama 3.3 at q8, and Qwen 2.5 72B even more so, are surprisingly good at this.

Also, even not-so-capable, smallish LLMs can be really good testers outside the bad-persona domain (given a good CLI interface). As long as your energy cost is okay-ish and you already have the hardware, that's quite a good use.


How does that work in practice? Do you connect the LLM up directly to the app somehow, or do you ask it what to do and then do what it says and see what happens?

How much RAM do you need to make those work acceptably well? I assume Llama 3.3 means the 70B model, so you need > 70GB. (So, I guess, a MacBook with 128GB?) In which case I guess you're also using 8 bits for the Qwen model?


We made an adapter (a specific CLI interface) for the LLM to interface with the app. Kind of like an integration test, just a little bit more sophisticated.

The LLM gets a prompt with the CLI commands it may use, and its "personality", and then it does what it does.
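A minimal sketch of what such an adapter loop could look like (the persona prompt, command names, and the `query_llm` stub are all hypothetical; the real prompt and CLI are app-specific):

```python
import subprocess
import shlex

# Hypothetical persona prompt; the real one describes the app's actual CLI.
SYSTEM_PROMPT = """You are "Mallory", a user trying to break the app.
Respond with exactly one of these CLI commands per turn:
  signup <name>, post <text>, report <post-id>, quit
"""

ALLOWED = {"signup", "post", "report", "quit"}

def parse_command(raw):
    """Return the command verb if it is allowed, else None."""
    parts = raw.strip().split(maxsplit=1)
    return parts[0] if parts and parts[0] in ALLOWED else None

def query_llm(history):
    """Placeholder for the local model call (e.g. via a llama.cpp server).
    Should return the model's next CLI command as a string."""
    raise NotImplementedError

def run_session(app_cli="./app-test-cli", max_turns=50):
    """Let the persona drive the app's test CLI, logging each exchange."""
    history = [("system", SYSTEM_PROMPT)]
    for _ in range(max_turns):
        raw = query_llm(history)
        verb = parse_command(raw)
        if verb is None:  # reject hallucinated commands
            history.append(("tool", "error: unknown command"))
            continue
        if verb == "quit":
            break
        # Feed the command to the app's test CLI and return its output
        result = subprocess.run(
            [app_cli, *shlex.split(raw)],
            capture_output=True, text=True, timeout=30,
        )
        history.append(("assistant", raw))
        history.append(("tool", result.stdout + result.stderr))
    return history
```

The transcript in `history` doubles as the test log you inspect afterwards.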

On the hardware side, I personally have 2x 3090 cards on an AMD TR 79x platform with 128GB RAM, which yields around 12 tokens/sec for Llama 3.3 or Qwen 2.5 72B (Q5_K_M), which is okay (ingestion speed is approx. double that).
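As a back-of-envelope check on why a 72B model at that quantization still spills past two 24GB cards (assuming Q5_K_M averages about 5.5 bits per weight, and ignoring KV cache and activations):

```python
def weight_gb(params_billion, bits_per_weight):
    """Weight-only memory estimate in GB (decimal), ignoring KV cache."""
    return params_billion * bits_per_weight / 8

print(weight_gb(72, 8.0))  # q8: 72.0 GB, far over 48 GB of VRAM
print(weight_gb(72, 5.5))  # Q5_K_M: 49.5 GB, still needs partial CPU offload
```

That ~49.5 GB against 48 GB of VRAM means some layers end up in system RAM, which is consistent with landing around 12 tokens/sec rather than full-GPU speeds.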

If you want to know more details, feel free to drop me a message (username at liku dot social)


Ask an LLM to spell "strawberry" one character per line. Claude's output, for example:

> Here's "strawberry" spelled out one character per line:
>
> s
> t
> r
> a
> w
> b
> e
> r
> r
> y

Most LLMs handle that perfectly, meaning they can break tokens down into individual characters. Yet most lack the ability to perform the multi-level inference needed to count the individual 'r's.

From this perspective, I think it's the opposite. Something like the strawberry test is a good indicator of how well an LLM can connect individually easy, but not readily interconnected, steps.
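The ground truth for both halves of the test is trivially computable, which makes model outputs easy to score automatically. A minimal checker (hypothetical harness, not any standard benchmark):

```python
import re

def score_spelling(answer, word):
    """True if the answer lists exactly the word's characters, one per line."""
    lines = [ln.strip() for ln in answer.strip().splitlines() if ln.strip()]
    return lines == list(word)

def score_count(answer, word, letter):
    """True if the first integer in the answer matches the true count."""
    match = re.search(r"\d+", answer)
    return bool(match) and int(match.group()) == word.count(letter)

# Many models pass the first check yet fail the second:
print(score_spelling("s\nt\nr\na\nw\nb\ne\nr\nr\ny", "strawberry"))  # True
print(score_count("There are 2 r's in strawberry.", "strawberry", "r"))  # False
```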


