I'm sure many of you remember the Monkey selfie from a few years ago... meh!
I'm also sure many of you understand the farther-reaching implications of this ruling, especially how it relates to software code written by AI. All that code written by AI cannot be licensed as anything besides public domain. Just think of all the code people have checked into git that they did not write! Next, please consider the implications for the open source community if there is ever a controversy about Linux kernel code that was AI generated and so suddenly cannot be covered by the GPL. I think the neck-beard people over at NetBSD can be eccentric about many things, but on this topic they deserved credit when they loudly banned all AI-generated code from their repos.
Yeah, the existing managers left behind will probably be overloaded, because one person cannot scale across so many direct reports. So perhaps Amazon has figured out how to scale middle managers so each can effectively manage several times more reports. Perhaps an AI/ML tool of some kind, which would seem kind of dystopian, but might not be awful... who knows, this is just wild speculation.
I was at a certain open source company IBM acquired, and it was certainly an interesting experience. I don't harbor any overtly negative feelings towards IBM. I used to have negative feelings about IBM as a customer in my former life as a sysadmin, many decades ago. However, being under the IBM umbrella was alright. Y'all are going to be fine!
The natural gas situation is still sketchy because the incentive to build more capacity is very low, but natural gas alongside coal electricity generation is known as baseload power. Very reliable inertial generators (turbines) smooth over the unstable electrical output from wind turbines or solar.
As far as battery capacity goes, that's also sketchy, because ERCOT is allowing dangerous, risky lithium packs to be installed in a tightly packed, space-optimized industrial layout that could result in disaster when one unit catches fire and spreads to the whole site. Better would be magnetically suspended flywheels in a hard vacuum, buried underground. These are more ecological, have the same baseload quality as a turbine's constant-spin generation, and require very little energy to keep up. I would suggest a hybrid approach: these flywheels in large numbers, with on-site lithium batteries to provide site power and line-leveling as the relays switch over to the flywheels, similar to how batteries in a data center last just long enough to let the diesel gen spin up...
The problem is that if all these renewable, eco-friendly sources crowd out the dirtier but more reliable sources, then eventually there will not be any reliable sources left. It's a nasty paradox.
> As far as battery capacity goes, that's also sketchy, because ERCOT is allowing dangerous, risky lithium packs to be installed in a tightly packed, space-optimized industrial layout that could result in disaster when one unit catches fire and spreads to the whole site.
Completely false. There have been fires in battery storage systems. They have never, ever escaped the containment of the metal unit that comprises a given module. There is really no mechanism for a fire to spread, either.
> because the incentive to build more capacity is very low,
The incentive for new gas generation is the same as for new solar generation: selling electricity profitably on the grid. If you are saying that new gas generation has trouble competing with batteries and solar, I would agree.
I would go further and say that new solar and batteries are cheaper than merely the fuel and operating costs of many existing fossil fuel plants. Especially coal. And a lot of gas too, particularly the peaker plants that burn for short amounts of time to capture price spikes on the market.
> Very reliable inertial generators (turbines) smooth over the unstable electrical output from wind turbines or solar.
There are two very different concepts here. The first is inertial generation for frequency regulation and supplying reactive power, which solar does not provide natively but now can with grid-forming inverters. Also, batteries have been serving frequency regulation for more than a decade, starting in the PJM market; the need for inertial generation has been displaced even by very old and expensive battery technology.
The second concept of "reliable" is dispatchable power: can you put power on the grid when it's needed? Batteries also solve this, and are being deployed, profitably, in Texas, as opposed to new gas generation.
> As far as battery capacity goes, that's also sketchy, because ERCOT is allowing dangerous, risky lithium packs to be installed in a tightly packed, space-optimized industrial layout that could result in disaster when one unit catches fire and spreads to the whole site.
That's a fairly minor implementation detail, and the installers are taking on all that risk on their own, as they will lose everything if there's a fire that spreads.
A battery going up in smoke is still far better than the amounts of natural gas that get burned in its place.
> Better would be magnetically suspended flywheels in a hard vacuum, buried underground. These are more ecological, have the same baseload quality as a turbine's constant-spin generation, and require very little energy to keep up. I would suggest a hybrid approach: these flywheels in large numbers, with on-site lithium batteries to provide site power and line-leveling as the relays switch over to the flywheels, similar to how batteries in a data center last just long enough to let the diesel gen spin up...
Flywheels are a completely impractical technology that doesn't scale and is far too expensive, especially in a vacuum underground.
Inverters are a far better solution than flywheels: cheaper, more battle-tested, widespread, and actually deployed in practice instead of just in the lab.
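To put rough numbers on the scaling claim (a back-of-envelope sketch; the material figures are my assumptions, not from this thread): a flywheel's specific energy is capped by its rim material's strength-to-density ratio, and even optimistic steel numbers land well below commodity lithium-ion cells.

```python
# Back-of-envelope: flywheel vs. lithium-ion specific energy.
# Assumption (not from the thread): a flywheel's maximum specific energy
# is roughly K * sigma / rho, where sigma is the rim material's tensile
# strength, rho its density, and K a shape factor (~0.5 for a flat disc).

K = 0.5          # shape factor for a flat disc (assumed)
sigma = 1.5e9    # Pa, high-strength steel (assumed)
rho = 7800.0     # kg/m^3, steel density

specific_energy_wh_per_kg = K * sigma / rho / 3600.0  # J/kg -> Wh/kg

li_ion_wh_per_kg = 200.0  # rough mid-range Li-ion cell figure (assumed)

print(f"Steel flywheel: ~{specific_energy_wh_per_kg:.0f} Wh/kg")
print(f"Li-ion cell:    ~{li_ion_wh_per_kg:.0f} Wh/kg")
```

Even before adding the vacuum vessel, magnetic bearings, and excavation, the storage medium itself is several times heavier per stored Wh than batteries, which is why grid-scale flywheel deployments have stayed niche.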
Vistra Energy built a battery storage plant in Moss Landing, CA that recently went up in flames, completely destroying the entire plant and dumping toxic chemicals into the local estuary. Ask the residents there if they thought it was "better than burning NG." They are trying to build another one a hundred miles south in Morro Bay.
There are good ways to do this but using small communities in another state as R&D isn’t one of them.
I'm one of the residents nearby who's full of concern. Ask me about the people in Moss Landing who had to repaint so often because of the natural gas plant that got replaced by the batteries.
People are full of environmental concern for batteries (good!) but rarely question the status quo.
We don't yet know the impact of the Vistra disaster, but it's being monitored closely and we will find out. Freaking out before there is any reason to freak out is a specialty in my area. Interesting examples are Wi-Fi-sensitive people who have no problem standing in the sun, or the X-ray technician who is concerned about cell phone towers causing cancer.
If there are any negative effects from the fire, they will be found, but they haven't been found yet.
> Moss Landing, Ca that recently went up in flames completely destroying the entire plant and dumping toxic chemicals into the local estuary.
While the Moss Landing BESS does seem to have been one of the worst built anywhere on the planet, the recent fire there did not (a) "completely destroy the entire plant" nor (b) "dump toxic chemicals into the local estuary".
When a modern vehicle (let alone a manufactured house) goes up in flames (which happens fairly often), it releases significantly more toxic gases than a BESS fire.
These corrections brought to you by your friendly not-in-your-neighborhood IFSAC Firefighter II.
You did raise a good point about latency-sensitive brokers, and I feel like that is becoming a self-selecting cartel based around the NY/NJ area. So when you have these folks talking about moving to places like Houston or Dallas, they are implicitly talking about breaking that latency cartel. Honestly, just my stupid personal opinion, but the SEC should grow the courage to ban low-latency trading. This topic has been beaten to death over the years, so I'm not adding anything to the discussion, just chiming in to say the obvious things... have a nice day.
Just for the record, the latency-arb / microwave-network speed game has been basically dead since 2018-2021. Look at Virtu, formerly a classic example of the trade and now almost entirely "switched sides", doing order execution services for the same big banks whose lunch they were formerly eating.
Furthermore, the wireless stuff is commoditized at this point. You can just rent capacity on the wireless networks that Apsara (et al.) offer, and while some firms have private networks, there's not enough money left in the trade (see above) to make one worth it if you don't already have it.
This is combined with liquidity moving away from public exchanges (both the lits and the darks) towards being matched internally or by a partner (PFOF matching), which is purely a win for retail traders and is its own force that isn't going away. (Go on Robinhood and buy 2 shares of SPY. It fills instantly. People love that. You can't just go get 2 shares of SPY off the lits, so where do you think those are coming from?)
Traditional HFT is dead. The firms are only still alive to the extent they've moved on to other trades, many of which are so much less latency-sensitive that the microwave edge doesn't give you enough alpha to be worth it.
(I worked for a long time at a firm that didn't move on to other trades... so I'm quite familiar with the scene.)
Exchange trading happens in round lots that are usually 100 shares.
This is pretty much just a legacy thing, but so many technical systems have this assumption built in that while odd-lot trading (trades not in the round lot size) has become a little more common on the exchanges, it’s still treated weirdly by the various systems involved.
But also, it’s better for you as a retail investor, to get them from a middleman, because they will generally give you a better price than the exchange. They will give you a better price because retail traders tend on average to be worse at trading than the overall market. You should take advantage of that, regardless of your actual ability level.
Odd lots don't contribute to the NBBO, and an order for an odd lot doesn't have to execute within the NBBO. (People can trade "past" you, and I'm pretty sure ISOs don't need to clear you, etc.) (Note these are rules for market participants, not retail customers.) So for a firm trying to argue it provides excellent price improvement and execution efficiency for its customers, it can't "just" send the orders to the lits.
And even if they could "just" do so, internal matching typically provides better price improvement on the NBBO than even the best execution you could get off the lits.
Edit: But yes TBC, you're correct that odd lot trades aren't unusual. But you're seeing trades there by actual market participants, not retail orders. They're not just trying to get those 2 shares, there's a broader strategy and they're aware of all the above nitty gritty.
In example 3, the NBBO for stock ABC is 495--500, but there is also an odd-lot offer for 497 on an exchange. If a Robinhood customer sends a market buy order, then Citadel is allowed to fill it at 499.999 even though it would be better to route it to the exchange. (And if they then pick up the odd lot themselves, it's easy arbitrage.)
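That example can be sketched numerically (the prices and the 0.001 price-improvement tick are hypothetical, following the example above):

```python
# Hypothetical sketch of example 3: NBBO is 495.00 x 500.00, plus a
# resting odd-lot offer at 497.00 that does not count toward the NBBO.
nbbo_bid, nbbo_ask = 495.00, 500.00
odd_lot_ask = 497.00  # excluded from the NBBO

# An internalizer can fill a retail market buy anywhere inside the NBBO,
# e.g. with a token price improvement on the ask:
internalized_fill = nbbo_ask - 0.001  # 499.999

# The customer pays 499.999 even though 497.00 was resting on-exchange,
# and the internalizer can lift the odd lot itself for easy arbitrage:
arb_per_share = internalized_fill - odd_lot_ask
print(f"fill: {internalized_fill:.3f}, arb per share: {arb_per_share:.3f}")
```

The point is that "inside the NBBO" and "best available price" diverge exactly when the best resting liquidity is an odd lot.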
By the way, while you're correct about some of your claims, odd lot executions definitely have to occur within the NBBO. (How could it be otherwise?) Otherwise, in the example above, Citadel would give an even worse price!
I mostly mean scenarios where your limit order might not be marketable, end up resting on the book, and then get traded "through". I'm speaking from the perspective of an actual direct market participant, where you're not using a market order but are trying to enter a position while adding liquidity/collecting a rebate. (Most exchanges reward participants who have some % of their trades as liquidity-added, with rebate tiers).
Odd lots are excluded from the NBBO so that the NBBO can't be as easily influenced by quantities of shares that don't represent any material price signal. 1 share of practically anything but BRK class A represents ~nothing. Less than a round lot at a price level is basically no liquidity available at that level.
There are per-ticker rules to allow odd lots on most US markets. AFAIK, unless you're trading penny stocks, every stock out there is eligible for odd lots, and most trades are indeed odd lots; that has been the case for 10 years at least.
Even if there weren't, I'd guess at least half the trading in stocks happens through CFDs and not cash, so lots aren't even a thing for most investors.
In the current political environment, I don't see SEC (or any other gov't agency) growing courage anytime soon. Well, other than DOGE acting like an energy vampire growing stronger off of its victims.
> Honestly, just my stupid personal opinion, but the SEC should grow the courage to ban low-latency trading.
I think that's really just a matter of the media giving bad press to HFTs "because it's scary". The boring reality is that not many people care, and HFTs are really not that important in the grand scheme of things. We're talking about maybe 4-5 firms worldwide making single to low double-digit billions in P&L, from an activity that is most likely net positive overall, or say net zero if you're a bit cynical. Good for them.
Honestly, that's not even peanuts compared to what more typical financial institutions manage and earn.
Your typical institutional investor (pension fund, insurance company, fund of funds, bank, etc.) manages in the 100s to 1000s of billions. Each.
The whole HFT industry probably makes what a single institutional investor earns by buying US debt at 1%.
The HFT industry really is just a small microcosm; it just so happens that it triggers dreams and fantasies in the public mind.
> If low latency trading was banned real humans could compete for that money.
But that's what we had before, and was it better? I don't think having thousands of trader monkeys buying and selling while refreshing their price feeds or shouting in a pit is any better.
At the end of the day, as long as there are market inefficiencies, there will be arbitrageurs. I don't see the point of kicking out those arbitraging at 1µs to replace them with people arbitraging at 1s or 1min.
For a while at least, HFT was able to pay a lot of money to some really smart people to do some really weird high-performance computing projects. Sure, to the industry the amount of money they had wasn't even peanuts, but to a normal person on the street it was still a lot of money, and even to the financial industry it was still worth (for a while) paying high salaries to the very best for that fraction of a peanut.
> HFT & payment for order flow is what has made stock trading the low fee environment it is today.
I get how payment for order flow would help enable the current low-upfront-fee trading system we have today: they're managing to get their money from places other than direct fees. I don't exactly get how HFT also makes it low cost. Could you explain that further? Is it that the people paying for order flow are pretty much exclusively HFTs, and if they didn't exist the order-flow market wouldn't exist?
Making up numbers here: if the HFTs manage to squeeze a dollar of profit out of the order flow after buying my trade data for a dollar (two dollars of spread they manage to find), is that really better than me paying a dollar or two in fees for that order? It would be interesting to see the real values in question to actually gauge what is better for an average trader in today's low-to-zero-fee market.
Spreads on bid/ask used to be literally at least 25 cents; they didn't even use decimals. Would you rather pay one guy in a funny jacket with a Palm Pilot in NY 25 cents, or a bunch of computer nerds sniping each other 1 penny?
Let's say you buy $10k of some $50 stock today and decide to sell tomorrow.
In the old days you'd have paid say $10 to your broker to buy, and $10 again to sell. Your bid-ask spread, in isolation from any price changes in the stock, would be 25 cents per share x ($10k / $50 = 200 shares) = another $50 in spread. So your all-in transaction costs would have been $70.
Now you probably have a no-fee brokerage, and generally a penny spread.
So the same formula is 1 cent per share x 200 shares = $2 in spread + $0 in fees. So your all-in transaction costs would be $2.
$2 vs $70 on $10k round trip investment. 2bps vs 70bps.
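The arithmetic above, as a quick sketch (same made-up example numbers):

```python
# Round-trip transaction cost: two broker fees plus one spread crossing
# per share (buy at the ask, sell at the bid).
def round_trip_cost(investment, share_price, spread_per_share, fee_per_trade):
    shares = investment / share_price
    return 2 * fee_per_trade + shares * spread_per_share

investment = 10_000.0  # $10k position in a $50 stock

old = round_trip_cost(investment, 50.0, 0.25, 10.0)  # 25c spread, $10 fees
new = round_trip_cost(investment, 50.0, 0.01, 0.0)   # 1c spread, no fees

print(f"old: ${old:.0f} ({old / investment * 1e4:.0f} bps)")
print(f"new: ${new:.0f} ({new / investment * 1e4:.0f} bps)")
```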
> I don't exactly get how HFT also makes it low cost
Because most HFT firms are also market makers. You can see them basically as middlemen that are mandated by the exchange to provide liquidity on both the bid and the ask. These liquidity mandates reduce the spread for other traders, and in exchange, market makers get lower, or even positive, fees (i.e. they are _paid_ to trade).
Usually, market makers use these rebates to earn money by taking a passive order risk on behalf of an aggressive order from a flow they bought.
Think of it this way:
You're an exchange, you want people to trade on your platform, that's how you earn money.
For people to trade on your platform, you need liquidity, actual shares to buy and sell. So you invite market makers onto your platform and sign a contract with them, along the lines of: "you have zero trading fees, but in exchange you need to provide $X of liquidity on the bid and the ask at all times and ensure a spread < Y bps".
Market makers who accept to on-board now have to somehow make a living while providing liquidity, but this is a risky business, because they are basically market making for people who are _more_ informed than them (they suffer adverse selection by design), and they have to respect their mandate of providing liquidity. That is, if a stock goes down and people start selling it, the market maker still needs to provide liquidity for sellers and buyers, which means maybe he will have to actually buy these shares that are tanking.
Usually the pure market making mandate is close to 0 profit, unless you spice it up with some other strategy. Taking passive order risk, netting order flow, maybe short term technical alpha, etc.
You can think of market makers and HFTs as basically the same people. If you trade at high frequency, you're playing on micro changes in price; there's only so much a stock price can realistically move in 1s. HFT is only viable if you have very low or no transaction costs. That's why there's a natural overlap between HFTs and MMs.
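As a toy sketch of that liquidity mandate (all numbers made up): the market maker must keep a two-sided quote within a maximum spread, centered on its fair-value estimate, even while the price tanks.

```python
# Toy market-maker quoting sketch: post both sides around a fair-value
# estimate, within a mandated maximum spread. All numbers are made up.
def make_quotes(fair_value, max_spread_bps):
    half = fair_value * max_spread_bps / 10_000 / 2
    return round(fair_value - half, 2), round(fair_value + half, 2)

# A $100 stock with a 10 bps mandated spread: 99.95 bid / 100.05 ask.
bid, ask = make_quotes(100.0, 10)
print(bid, ask)

# If the stock tanks, the mandate still requires two-sided quotes, so
# the market maker keeps bidding on the way down (adverse selection):
for fv in (100.0, 99.0, 98.0):
    b, a = make_quotes(fv, 10)
    print(f"fair={fv:.2f}  bid={b:.2f}  ask={a:.2f}")
```

This is why the pure mandate is close to zero profit on its own: the obligation to stay quoted is exactly what exposes the market maker to informed flow.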
And as a refresher for why HFT exists - it's a side effect of RegNMS. RegNMS exists because the guys in funny jackets in NY with palm pilots were ripping people off.
There was no consolidated tape and no obligation for exchanges to route your order for best execution. There was no National Best Bid and Offer.
There was just whatever price the exchange your broker sent your order to filled you at.
Some exchanges *cough* NASDAQ *cough* used to do things like display sub-penny quotes even though they only filled at full-penny increments, so they could attract flow by advertising prices they wouldn't fill you at. See Rule 612.
Not sure why this got downvoted, and I'm not sure what else people think "humans competing for that money" would look like.
You're gonna need to physically collocate those people if you're trying to ban computers and latency-based trading. Possibly in a "pit", maybe in a building called an "Exchange", in places with a lot of financial services people, like say NY or Chicago. Probably need some sort of membership/license requirement due to finite space. I dunno. Sounds like a novel concept that's never been tried.
Before the internet? Things are a bit different now. HFT monopolized the market maker/arbitrage money with millisecond executions that nobody can compete with.
You said "If low latency trading was banned real humans could compete for that money."
As soon as you re-introduce distance, latency becomes a factor again. How do you eliminate "low latency trading" and prioritize "real humans" without putting them in the same room?
What do you actually propose here?
One way to reduce the impact of latency is to do away with continuous trading and move to frequent but discrete auctions. But this would just increase volatility.
Imagine if every X minutes / hours stocks moved Y% like they do at market open, as all the information that was disseminated since the last auction was re-priced in.
If anything the long term trend has been towards longer continuous trading sessions to reduce those types of jumps.
I've wondered if introducing a small but omnipresent random delay in all trading requests might suffice. Something like 0-100 milliseconds. Just enough to moderate some of the advantage that physically co-located automated traders have, while not outright banning it.
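For what it's worth, a quick Monte Carlo sketch (with made-up latencies) suggests a uniform 0-100 ms jitter would heavily dilute, though not erase, a fixed speed edge:

```python
# Monte Carlo sketch: how often does a trader with a 9 ms latency edge
# still arrive first once every order gets an independent uniform
# 0-100 ms delay? Base latencies are hypothetical.
import random

random.seed(0)

fast, slow = 1.0, 10.0  # base latencies in ms (made up)
trials = 100_000
fast_wins = sum(
    fast + random.uniform(0, 100) < slow + random.uniform(0, 100)
    for _ in range(trials)
)
print(f"fast trader wins {fast_wins / trials:.1%} of the time")
```

Analytically the fast trader should win about 58.6% of races instead of 100%, so the edge shrinks from decisive to modest without being banned outright.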
>However this “thought police” and “arrested for posting memes” comment that often gets pointed on here is itself a nonsense meme.
Are you for real? These accusations are not merely memes.
While I don't endorse terrible people, it is noteworthy that sometimes awful people are the target of even more awful laws. For example, you can research a person named "Adam Smith-Connor", who was literally convicted for standing in public while silently praying. The conduct of standing while appearing to pray was deemed a form of illegal protest too near an abortion clinic. The exact same thing happened to another person, "Isabel Vaughan-Spruce", who was not convicted.
There are also well-documented incidents in the UK involving the prosecution of people for remarks made online, which could arguably cross into thought-crime territory. I'll leave it to you to actually research these incidents; Google is your friend.
As usual in these HN threads on the UK, there’s a reasonable point that could be made about whether or not this restriction correctly balances the right to free speech against women’s right to access healthcare. But instead we see a lot of wildly exaggerated talk about “thought crimes”, etc. etc.
> For example, you can research a person named "Adam Smith-Connor", who was literally convicted for standing in public while silently praying. The conduct of standing while appearing to pray was deemed a form of illegal protest too near an abortion clinic.
Those people are not trying to genuinely pray, but to intimidate women considering or wanting to get an abortion.
> There are also well-documented incidents in the UK involving the prosecution of people for remarks made online, which could arguably cross into thought-crime territory.
> This is par for the course I guess, and what exhausts folks like marcan. I wouldn't want to work with someone like Ted Ts'o, who clearly has a penchant for flame wars and isn't interested in being truthful.
I am acquainted with Ted via the open source community; we follow each other on multiple social networks, and I think he's a really great person. That said, I also notice when he gets into flame wars with other people in open source social circles, and sometimes those other people are also friends or acquaintances.
I can think of many times Ted was overly hyperbolic but ultimately correct. Here is the part of the Linux project I sometimes don't like, which was described well recently in this thread: being correct, or at least subjectively correct by having extremely persuasive arguments, while being toxic... is still toxic and unacceptable. There are a bazillion geniuses out there, and being smart is not good enough anymore in the open source world; one has to overcome those toxic "on the spectrum" tendencies or whatever, and be polite while making reasonable points. This applies to conduct as well as to words written in email/chat threads. Ted is one of those, alongside Linus himself, who has in the past indulged in a bit of shady conduct or remarks, but their arguments are usually compelling.
I personally think of these threads in a way related to the calculus of infinitesimals: using the "standard part" function to zero away the hyperbolic remarks, the same way that function zeros away infinitesimals from hyperreal numbers, sort of leaving only the real remarks. This is a problem, though, because it's people like me, arguably the reasonable people, who through our silence enable these kinds of behaviors.
I personally think Ted is more right than wrong, most of the time. We do disagree sometimes, though; for example, Ted hates the new MiB/KiB system of base-2 units and for whatever reason likes the previous, more ambiguous system of confusingly mixed base-10/base-2 units of MB/Mb/mb/KB/Kb/kb... and I totally get his argument that a new standard makes something already confusing even more confusing, or something like that. Meh...
> Ted hates the new MiB/KiB system of base-2 units and for whatever reason likes the previous, more ambiguous system of confusingly mixed base-10/base-2 units of MB/Mb/mb/KB/Kb/kb
Here's my best argument for the binary prefixes: Say you have a cryptographic cipher algorithm that processes 1 byte per clock cycle. Your CPU is 4 GHz. At what rate can your algorithm process data? It's 4 GB/s, not 4 GiB/s.
This stuff happens in telecom all the time. You have DSL and coaxial network connections quantified in bits per second per hertz. If you have megahertz of bandwidth at your disposal, then you have megabits per second of data transfer - not mebibits per second.
Another one: You buy a 16 GB (real GB) flash drive. You have 16 GiB of RAM. Oops, you can't dump your RAM to flash to hibernate, because 16 GiB > 16 GB so it won't fit.
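Those three examples boil down to simple arithmetic; a quick sketch:

```python
# Decimal vs. binary prefixes, using the examples above.
GB = 10**9
GiB = 2**30

# A cipher processing 1 byte per clock cycle on a 4 GHz CPU moves
# 4e9 bytes/s: exactly 4 GB/s, but only ~3.73 GiB/s.
rate = 4e9
print(rate / GB, "GB/s")
print(round(rate / GiB, 2), "GiB/s")

# 16 GiB of RAM does not fit on a "16 GB" flash drive:
shortfall = 16 * GiB - 16 * GB
print(f"shortfall: {shortfall / GB:.3f} GB")
```

The hibernation example comes out to roughly 1.18 GB that simply has nowhere to go.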
Clarity is important. A lack of clarity is how, hundreds of years ago, every town had its own definition of a pound and a yard, and trade was filled with deception. Or even look at today, with the multiple definitions of a ton, and a US gallon versus a UK gallon. I stand by the claim that overloading kilo- to mean 1024 was the original sin.
> Another one: You buy a 16 GB (real GB) flash drive. You have 16 GiB of RAM. Oops, you can't dump your RAM to flash to hibernate, because 16 GiB > 16 GB so it won't fit.
Right, but the problem here is that RAM is sold in different units than storage. It seems strictly worse if your 16 GB of RAM doesn't fit in your 16 GB of storage because you didn't study the historical marketing practices of these two industries, than if your 16 GiB of RAM doesn't fit in your 16 GB of storage, because at least in the second case you have something to tip you off that they're not using the same units.
> I can think of many times Ted was overly hyperbolic but ultimately correct. Here is the part of the Linux project I sometimes don't like, which was described well recently in this thread: being correct, or at least subjectively correct by having extremely persuasive arguments, while being toxic... is still toxic and unacceptable.
I want to say that I am thankful in this world that I am a truly anonymous nobody who writes code for closed-source mega-corp CRUD apps. Being a tech "public figure" (Bryan Cantrill calls it "nerd famous") sounds absolutely awful. Every little thing that you wrote on the Internet in the last 30 years is permanently recorded (!!!), then picked apart by every Tom, Dick, and Harry, and every Internet rando. My ego could never survive such a beating. And yet here we are in 2025, where Ted Ts'o continues to maintain a small mountain of file system code that makes the Linux world go "brrr".
Hot take: Do you really think you could have done better over a 30 year period? I can only answer for myself: Absolutely fucking not.
I, for one, am deeply thankful for all of Ted's hard work on Linux file systems.
There are plenty of "nerd famous" people who manage it by just not being an asshole. If you're already an asshole being "nerd famous" is going to be rough, yes, but maybe just don't be one?
> Getting more specific, I don't buy the argument that we're getting more conservative.
Agreed. I'm pretty sure normal folks never actually shifted left, not as much as the far-left ideologues imagined. Folks will use any plausible excuse to end the insanity progressive politics has caused. Mind you, the reverse is also true of ultra-conservative politics. The world is elastic in this sense, and we see corrections from time to time.