There are a lot of very distinctive versions of English floating around in the wake of the British Empire; Indian newspapers are particularly delightful that way. But there is, as the author says, an inherited common educational system dating back to the colonial period, which has probably created a fairly common "educated dialect" abroad, just as it has across all the local accents and dialects back in the motherland.
That's not a very good argument, because then you could say the same for America, Canada, South Africa, Australia and so on. If recency is the issue, then here's a list of colonies that got their freedom around the same time:
Cyprus, Somalia, Sierra Leone, Kuwait, Tanzania, Jamaica, Trinidad and Tobago, Uganda, Kenya, Malawi, Zambia, Malta, Gambia, Guyana, Botswana, Lesotho, Barbados, Yemen, Mauritius, Eswatini (Swaziland).
If what you're saying is right, then you'd have to admit Jamaican and Barbadian English are just the same as Kenyan or Nigerian English... but they're not. They're radically different because they're radically different regions. Uganda and Kenya being similar is what I would expect, but not necessarily Nigeria...
>They're radically different because they're radically different regions.
They're radically different predominantly at the street level and in everyday usage, but the kind of professional English of the journalists, academics and writers the author of the article was surrounded by is very recognizable.
You can tell an American from an Australian on the beach, but in a journal or in an article in a paper of record it's much more difficult. Higher-ed English, with its roots in a classical British education, can be found all over the globe.
If they optimize though - and this is coming at some point - local AI becomes possible, and their entire business case as a cloud monopoly evaporates. I think they know they're in a race between centralized control and widespread use and control, and that is what is really driving this.
Yes, if you see the LLM as a compressed dictionary of all available information.
But if they succeed with agentic reasoning models (we are absolutely not there yet), then I think meritocracy will be replaced with assetocracy. The better the model, the more expensive it will be, and the better the resulting software will be.
I don't worry about it myself, but I do worry for my kids. I'm not even sure what to teach them anymore to have a shot at early retirement (and they keep raising the retirement age too).
Teach them basic financial literacy. The time value of money, the power of compounding, the relationship between risk and expected returns. Grade school does not cover any of this.
It does not matter what your income is if you cannot budget and save.
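To make the compounding point concrete, here's a minimal sketch; the $200/month contribution and 7% annual return are purely illustrative assumptions, not advice:

    # Illustrative only: future value of a fixed monthly contribution
    # at an assumed annual return, compounded monthly.
    def future_value(monthly: float, annual_rate: float, years: int) -> float:
        r = annual_rate / 12            # monthly rate
        n = years * 12                  # number of contributions
        return monthly * ((1 + r) ** n - 1) / r

    for years in (10, 20, 30, 40):
        print(years, round(future_value(200, 0.07, years)))
    # roughly 35k, 104k, 244k, 525k - the last decade alone adds
    # more than the first three decades combined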
Financial literacy is a red herring. If you just keep your savings in gold or an index fund, that gets you practically all the way there. It takes all of two minutes to teach, and it compounds itself.
Risk too is sort of a red herring. Just buy in whenever it dips, and you are set. Diversify just enough to dilute the aggregate risk, and it practically disappears.
Saving is not even possible on a low income, only on a medium to high income. The lesson to learn is to avoid wasteful, excessive spending that benefits you only in the moment.
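On the diversification claim, here is a rough sketch of why pooling uncorrelated holdings dilutes aggregate risk (idealized assumptions: equal weights, identical volatility, zero correlation):

    import random, statistics

    # Toy model: n uncorrelated assets with the same mean return and
    # volatility; an equal-weight portfolio's stdev shrinks ~ sigma/sqrt(n).
    def portfolio_stdev(n, trials=20000, mu=0.07, sigma=0.20):
        samples = []
        for _ in range(trials):
            draws = [random.gauss(mu, sigma) for _ in range(n)]
            samples.append(sum(draws) / n)      # equal-weight return
        return statistics.stdev(samples)

    for n in (1, 4, 16, 64):
        print(n, round(portfolio_stdev(n), 3))
    # roughly 0.20, 0.10, 0.05, 0.025 - but real assets are correlated,
    # so risk is diluted, not eliminated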
Yes, keep focused on the future, deny the moment. Avoid testing your own experience about what waste and excess mean. Follow the herd and treat participation algorithmically. Buy in. All key points for a satisfying life well lived.
Plenty of people have bought what they thought was the dip, only to watch an instrument go to almost zero and never recover. Look at Bed Bath and Beyond. It’s not quite that simple.
If you buy junk with all of your money, that's on you. I mentioned gold and (broad) index funds, although a few select cryptocurrencies also work. Buying junk must be restricted to very small amounts only, to what one is willing to risk.
You can spend $100B on assets, but it doesn't mean you'll turn a profit.
Capitalism certainly favors those with the most... capital, but there are quite a few other factors. Market fit, efficiency, etc. The Dutch East India Company had the most assets, yes, but also the best ships and a killer (literally) business model.
The notion of a sector where success is determined almost entirely by who can stockpile the most assets (GPUs, in this case) is an unusual situation and probably merits its own term.
Well, honestly, this security person thinks it's a terrible idea - but needless to say, the people selling those systems disagree - and for non-technical management, it ticks the compliance box and lets them get back to their jobs.
It's not just the legal system. US doctors are typically paid on a piece-rate basis, and the medical records systems are extremely fragmented, so there is an incentive to order repeat tests (as you get passed around from specialist to specialist), and no incentive to put in the systems that would make that unnecessary.
No, it does make sense. Most of the purported growth in government spending comes from using raw figures and not correcting for either inflation or monetary expansion. It is a convenient mistake.
It's also a lot of assumptions. This probably is an attacker - or a wannabe at least. But it could be a student or researcher working through a cybersecurity course, and for some projects your search flow would look a lot like this.
They mention in the write-up that they correlated certain indicators with what they had seen in other attacks to be reasonably sure this was an active attacker.
The problem, to me, is that this is the kind of thing you'd expect to be done by a state intelligence organization with explicitly defined authorities to surveil foreign attackers, codified in law somewhere. For a private company to carry out a massive surveillance campaign against a target based on its own determination of the target's identity, and then to publish all of it, is much more legally questionable to me. It's often ethically and legally murky enough when the state does it; for a private company to do it seems like operating well beyond their legal authority. I'd imagine (or hope, I guess) that they consulted a lawyer before this campaign as well as before this publication.
Either way, it's not a great advertisement for your EDR service to show everyone that you're shoulder-surfing your customers' employees and potentially posting it all to the internet if you decide they're doing something wrong.
> The standout red flag was that the unique machine name used by the individual was the same as one that we had tracked in several incidents prior to them installing the agent.
The machine was already known to the company as belonging to a threat actor from previous activity.
Yes, but only according to the company's own logs, which were not externally validated. To rephrase, the company thinks this was an active attacker based on logs its own tool generates. It does not discount the possibility that the tool generated erroneous logs or identified the wrong machine(s).
That's not very convincing. They still abused trust placed in them - by an active attacker, granted, but still... This seems like a legally risky move and it doesn't inspire trust in Huntress.
Whose trust? Their job is to hunt down and research threat actors. The information gained from this is used to better protect their enterprise customers.
This builds trust with their customers and breaks trust with... threat actors?
Threat intelligence is a thing. In fact, there are entire companies that sell just that, and entire government organizations that do just that.
Sure but that's not what their customer was engaging with them to do. It's not ethical to sell "EDR" services and then use that access to spy on your customers for intelligence purposes.
No, it's really not - it's exactly what they are: multi-dimensional pattern-matching machines, using massive databases put together from resources like Stack Overflow and Chegg (every cheater's go-to for assignment answers, massive copyright theft, etc.). If that wasn't the case, there wouldn't be jobs right now writing answers to feed into the databases.
And that's actually quite useful - given that most of this material is paywalled or blocked from search engines. It's less useful when you look at code examples that mix different versions of Python, or have comments referring to figures on the previous page. I'm afraid it becomes very obvious, when you look under the hood at the training sets themselves, just how this is all being achieved.
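For a concrete sense of that version mixing, here's a toy check; the snippet is invented for illustration, not taken from any real training set:

    import ast

    # Invented example: a Python 2 print statement next to a Python 3
    # f-string. No single interpreter accepts both, which is one quick
    # way mixed-version training examples reveal themselves.
    snippet = 'print "loading data..."\nprint(f"loaded {n} rows")\n'
    try:
        ast.parse(snippet)
    except SyntaxError as e:
        print(f"not valid Python 3 - line {e.lineno}: {e.text.strip()}")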
Look into every human’s brain and you’d see the same thing. How many humans can come up with novel, useful patents? How many novel useful patents themselves are just variations of existing tech?
All intelligence is pattern matching, just at different scales. AI is doing the same thing human brains do.
> Look into every human’s brain and you’d see the same thing.
Hard not to respond to that sarcastically. If you take the time to learn anything about neuroscience, you'll realise what a profoundly ignorant statement it is.
If that is the case, where are the LLM-controlled robots, where an LLM is simply given access to a bunch of sensors and servos and learns to control them on its own? And why are jailbreaks a thing?
If tomorrow, all human beings ceased to exist, barring any in-progress operations, LLMs would go silent, and the machinery they run on would eventually stop functioning.
If tomorrow, all LLMs ceased to exist, humans would carry on just fine, and likely build LLMs all over again, next time even better.