I have never worked at Renaissance Technologies, but I do run an HFT firm and am quite familiar with my own techniques as well as techniques of other competing firms, and none of us use anything that could be described as remotely sophisticated.
One thing I tell new quants that I hire is that your job is kind of like a magic trick. To an outside observer the trick looks almost supernatural, but once you understand it you realize it's so unbelievably simple and straightforward you wonder how you were ever fooled by it.
That said, even if someone tells you how a magic trick is performed and then hands you everything you need to perform it, you are almost certainly going to mess it up. It takes a lot of practice, skill and discipline to pull off even a simple magic trick even after you fully understand it. All of the difficulty of a magic trick is in the execution, not in the idea.
It's the same with quantitative finance: the techniques do not involve anything remotely complex like quantum mechanics, and in fact I am almost certain to reject strategies that are overly complex... but even if I revealed how our simplest trading algorithms work, it would still be incredibly difficult to actually execute them. Taking an idea and translating it into a high-performance algorithm that is bug-free; dealing with networking; collecting and managing petabytes of data; keeping a tight iteration loop; risk management; and most of all, having the creativity to identify something simple among a sea of complexity: those are what make my firm and other firms successful.
It's unbelievably difficult work and most quants do end up failing, but it's not difficult in the way that many people think it is. If anything, most quants I've worked with fail by overcomplicating things and not being able to work from the ground up, off of first principles.
This is exactly what I hear from people who have run their own funds. In particular, one person who ran an HFT firm from the late 90s through the early 2010s said he rarely used anything more complex than linear regression with at most two variables. The speed of computation also matters here; his fund was eventually squeezed out by bigger players with more racks. I did hear from a person who ran a quant fund with longer holding times that they used models such as the Kalman filter, which requires far more computation to estimate values than regression.
Good sources are very hard to find because most of it is absolute trash intended to appeal to a certain audience who think there's money to be made off reverse Fibonacci patterns or other silly-sounding technical indicators that, once again, sound technical and fancy but are completely useless.
If you want to know what it's really like to be a quant, review material from ARPM; everyone I hire goes through their 6-day bootcamp, but they have other materials as well:
Those are pretty good sources to get an overview of what actual quants at successful firms know.
Quantitative trading is technical; I don't want to give the impression that it's not technical... but it's not "fancy" technical. It's more along the lines of rigorous and ironclad than flashy and sophisticated. Every strategy is built up step by meticulous step in precise detail, and every step needs to be rigorously justified and experimentally verified.
Generally the thought process starts from the assumption that there is no money to be made in the stock market, whether due to perfect efficiency or to things like fees eating up any potential profits... when we talk about models, the models we construct describe how the stock market would behave if it were perfectly efficient, i.e. free of any arbitrage opportunity.
Then, given our model of a perfectly efficient stock market, we simulate what we should expect to observe in such a market... and we investigate empirically whether these observations hold in reality. Is the market genuinely efficient all day, every day, across every security?
For some phenomena it really is, but sometimes the market deviates from the model, so either our model is incorrect or an arbitrage opportunity has presented itself. If an arbitrage opportunity presents itself, we investigate how feasible it is to capture: things like engineering effort, risk factors, profitability, etc...
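As a toy version of that workflow: under the efficient-market null model, prices follow a random walk, so the variance of k-period returns should be k times the 1-period variance. A variance ratio far from 1 in real data flags a deviation worth investigating. (Simulated data and a deliberately simplified estimator below; not anyone's actual pipeline.)

```python
import random

random.seed(0)

# Under the no-arbitrage null model, returns are i.i.d., so the variance
# of k-period returns is k times the 1-period variance (variance ratio = 1).
def variance_ratio(returns, k):
    n = len(returns)
    var1 = sum(r * r for r in returns) / n            # mean return ~ 0 assumed
    agg = [sum(returns[i:i + k]) for i in range(n - k + 1)]
    vark = sum(r * r for r in agg) / len(agg)
    return vark / (k * var1)

walk = [random.gauss(0, 0.01) for _ in range(100000)]
vr = variance_ratio(walk, 5)   # close to 1 for an efficient (random-walk) market
```

A trending market pushes the ratio above 1; a mean-reverting one pushes it below.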
If all of that works out, then we get to work constructing a state machine for an algorithm to capture that opportunity.
We implement the state machine, write tests for it, run it through our backtester, then run it through our live simulator, and after everything checks out we deploy it live.
Every algorithm is treated like a person, it's given its own human-like name, has its own account, its own set of permissions, capital allocated to it, risk profile, and algorithms are evaluated on a daily basis to reallocate capital to them and modify their risk profile.
Assuming one has an appropriate PhD in math/physics, what other kind of general knowledge questions would you be asking people in an interview? For example, I've read that one should learn modern portfolio theory before going for an interview. Is this sort of thing true, or would that be a waste of time?
Every firm is different, and I'm sure many hedge funds would consider that an asset, but since we do not invest any capital, we are not a hedge fund, and we are not in any way in the business of constructing portfolios for long-term investing, so we don't have much use for knowledge of finance, economics, or trading. When we hire people, they all go through a fairly comprehensive 3-month ramp-up period where we teach everything one needs to know about those subjects, from the absolute fundamentals, in a way that is tailored to how our business operates.
Our interview process involves data structures, algorithms, probability and statistics.
For example, if I asked you to write an algorithm to generate 4 random uniformly distributed positive integers (that means excluding 0) that sum to 100, could you do it? That question, or a similar one, will eliminate about 80% of applicants, including those with PhDs, who are unable to produce even a remotely viable solution after considerable assistance. Then you have the remainder who can produce a competent solution that isn't exactly uniform: there's some small bias in the answer, but it's good enough and I can usually walk them through working out the kinks. And then maybe 5% or less are able to work out a perfect solution.
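For what it's worth, one exactly uniform solution to that question uses the classic "stars and bars" bijection: an ordered 4-tuple of positive integers summing to 100 corresponds exactly to a choice of 3 distinct cut points in 1..99, so uniform cut points give a bias-free sample. A minimal sketch:

```python
import random

def uniform_composition(total=100, parts=4):
    # "Stars and bars": an ordered tuple of `parts` positive integers
    # summing to `total` corresponds one-to-one with a set of `parts - 1`
    # distinct cut points in 1..total-1. Sampling the cut points uniformly
    # therefore gives an exactly uniform composition, with no bias.
    cuts = sorted(random.sample(range(1, total), parts - 1))
    bounds = [0] + cuts + [total]
    return [b - a for a, b in zip(bounds, bounds[1:])]
```

Common near-miss answers (e.g. drawing three values and taking the remainder, or repeatedly splitting intervals) produce the small biases mentioned above.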
Some other questions you should be able to handle confidently would be... how many times should I expect to flip a coin before I see heads 3 times in a row?
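The coin question falls out of a small linear system over streak states (the answer for 3 heads in a row is 14). A sketch, with E[k] the expected remaining flips given a current streak of k heads:

```python
def expected_flips(run=3):
    # State k = current streak of heads. Each flip either extends the
    # streak (prob 1/2) or resets it (prob 1/2):
    #   E[k] = 1 + 0.5 * E[k+1] + 0.5 * E[0],  with E[run] = 0.
    # Write E[k] = a + b * E[0], back-substitute from k = run down to 0,
    # then solve E[0] = a / (1 - b).
    a, b = 0.0, 0.0
    for _ in range(run):
        a, b = 1 + 0.5 * a, 0.5 + 0.5 * b
    return a / (1 - b)

# expected_flips(3) gives 14.0, matching the closed form 2**(run + 1) - 2
```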
More challenging questions will be about Markov chains, optimal stopping, random walks. All questions can be solved either by writing an algorithm, or providing a mathematical solution.
Then just go over your resume and ask you technical questions about it... usually I will find something you worked on, you will describe it casually and then I will ask a technical question about some specific area you bring up that you seem interested in.
I'm glad I asked. It's been a very long time since I've done statistics. I'll definitely put more time into refreshing the basics.
Are these questions asked on a computer? As in do you allow the applicants to program it up and show you a working version, or do you have to perform it there on the spot on the blackboard?
Whatever works for the person, especially with COVID all my interviews are done remotely over Zoom so people type it up on their computer or write it on paper and then hold it up to the webcam.
We don't stop algos from competing with one another. That said, it is illegal for us as a firm to be on both sides of a trade, i.e. buying from and selling to ourselves, in what is referred to as a wash trade. This can happen if algo A is buying Microsoft at the same time that algo B is selling Microsoft.
We do prevent this by using a fairly standard component called an internalizer. Almost all of our orders go through such an internalizer and if it sees that two orders are matching one another, the internalizer will perform that match directly instead of having it go to the market.
This happens quite frequently, in fact, and it's good to be able to catch it beforehand: it avoids paying fees and causing any market impact, and it obviously keeps us compliant with regulations.
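As a rough illustration of the idea (not the actual component described above; class and method names are hypothetical), an internalizer can be sketched as a crossing step that matches opposing orders from different algos before anything is routed out:

```python
from collections import defaultdict, deque

class Internalizer:
    # Hypothetical sketch: cross opposing orders from our own algos
    # against each other before routing anything out, so the firm never
    # ends up on both sides of a trade at the exchange (a wash trade).
    def __init__(self):
        self.resting = defaultdict(deque)   # (symbol, side) -> queue of orders

    def submit(self, algo, symbol, side, qty):
        """Returns the quantity matched internally; any remainder rests
        here (in a real system it would be routed to the market)."""
        opposite = "SELL" if side == "BUY" else "BUY"
        book = self.resting[(symbol, opposite)]
        filled = 0
        while book and filled < qty:
            other_algo, other_qty = book[0]
            take = min(qty - filled, other_qty)
            filled += take
            if take == other_qty:
                book.popleft()                      # fully consumed
            else:
                book[0] = (other_algo, other_qty - take)  # partial fill
        if filled < qty:
            self.resting[(symbol, side)].append((algo, qty - filled))
        return filled
```

For example, if algo A rests a 100-share buy and algo B then sells 60 shares of the same symbol, the 60 shares cross internally and only the 40-share remainder would ever reach the market.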
The description says they already have a market simulator, which must be a significant piece of software to construct initially. Once you have that, you can just run simulations and observe empirically whether your algorithms end up competing against each other.
Nick Patterson, formerly a senior statistician at Rentech, was on a podcast saying that their most important tool was a simple regression.
I suppose he could’ve been lying, but the believable accounts suggest that the majority of their magic is simply having smart people use simple tools very, very well.
I think this is right - my experience at JS and similar places is that really understanding your data-generating process and the nitty-gritty assumptions of your data/simple model makes a huge difference.
E.g. knowing what's causing missing values in your data, and what implications various fixes might have on bias in your linear regressor, is probably way, way more valuable than fitting some shiny non-linear toy.
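A toy numerical illustration of that point, with made-up parameters: if values go missing in a way that depends on the value itself, simply dropping the incomplete rows biases the regression slope.

```python
import random

random.seed(0)

# True relationship: y = 2x + noise. Suppose y is only recorded when it is
# below 0, i.e. missingness depends on the value itself (the nasty case).
n = 10000
x = [random.gauss(0, 1) for _ in range(n)]
y = [2 * xi + random.gauss(0, 1) for xi in x]
observed = [(xi, yi) for xi, yi in zip(x, y) if yi < 0]

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

full = ols_slope(x, y)                    # close to the true slope of 2
dropped = ols_slope(*zip(*observed))      # attenuated: truncation bias
```

Knowing the missingness mechanism tells you whether dropping, imputing, or modeling the selection is the right fix; none of them is free.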
> Famously, he was sacked by the NSA for giving an interview opposing US involvement in the Vietnam war.
Another cult of personality in business. Bill Gates, Jeff Bezos, Henry Ford, JP Morgan, etc. managed to do a lot without cults of personality.
The more BS I see, the more I question what they are hiding, and the more I question their judgment in distracting their organizations from practical business, and distorting reality.
The famously outperforming fund at Renaissance (Medallion) does not take outside money, and Renaissance itself is a privately held company, so there's not much need to cultivate such a cult.
Honestly, I don't really get your perspective. Simons legitimately was one of the best mathematicians in the world and was sacked from the NSA. This isn't some Musk-esque 'I sleep on the factory floor and sign off on everything despite lacking the bona-fides' nonsense, he was 40 before entering the investment industry and before that won top research prizes, published quality research and ran a math department.
OT, based on what university faculty have told me, so maybe not true everywhere: chairing a department is a thankless political nightmare. It's foisted on people who can't say no; it isn't an honor.
Simons's chairmanship of the Stony Brook math department is an interesting case. He was hired from outside the department to be the chair, rather than 'promoted' internally because he was the last to touch his nose. Stony Brook was a bit of a backwater at the time, but they were willing to spend resources building up a first-rate math department. Simons was brought in, in essence, because he had the connections needed to recruit senior, well-respected mathematicians to come to Stony Brook.
What's interesting about Simons and RenTec is that a _lot_ of the people involved are genuinely first-rate scientists. And I don't mean econ Nobel laureates, like at LTCM; I mean the kind of people who'd have joined the Manhattan Project 50 years earlier. That fact, together with the quality of Medallion's returns, makes people wonder if they really found something.
FWIW, I don't believe this myself. But that's where the interest comes from.
Because people are trying to work out what his incredibly secretive hedge fund could be up to? His past is one of the few ways of getting any kind of hint.
lolol. i will bet all of my 401k that they absolutely do not.
i've interviewed at a couple of hedge funds and have lots of friends at various prop shops/hft firms/etc. most of them are using linear models. the ones that have market maker businesses deploy those linear models to fpgas.
people act like these places are spooky magic cauldrons of physics + math + computer science. they're not (but it's definitely in their interest to promote such rumors). what they do have is very very robust data collection/aggregation infrastructure and backtesting systems. the rest is just correctness of execution of strats.
I'll second that. I've done options trading, trend following, and HFT. Often you need smart people to do simple things, but they are still simple things. Gotta remember there's a lot of noise in financial data, and you don't actually have as much of it as people think. Even all the ticks on all the exchanges fit in a few GB of binary per day.
RenTech is especially good at acting like they're from another planet, which I'm sure helps them attract those stellar mathematicians.
You're especially right about the noise in financial data. A nice thing about linear models (vs. e.g. random forests, or even worse, deep learning) is that they are limited in how much they can overfit the noise: they only fit a known function to known regressors.
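A toy demonstration of that capacity point: on pure noise there is nothing to learn, and a straight-line fit can only pretend to find a little structure in-sample, while a model flexible enough to memorize the data "fits" it perfectly.

```python
import random

random.seed(0)

# Pure noise: there is no structure for any model to find.
n = 50
x = [random.random() for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]

def r_squared(pred, actual):
    mean = sum(actual) / len(actual)
    ss_res = sum((p - a) ** 2 for p, a in zip(pred, actual))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# A straight-line fit can only chase the noise a little...
mx, my = sum(x) / n, sum(y) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
linear_pred = [my + slope * (a - mx) for a in x]

# ...while a model that can memorize (think 1-nearest-neighbor) scores perfectly.
memorized_pred = y[:]

# r_squared(linear_pred, y) stays near 0; r_squared(memorized_pred, y) is 1
```

The memorizer's perfect in-sample score is exactly the overfitting that low signal-to-noise data punishes out of sample.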
RenTech is also basically the only large hedge fund that has beaten the market by a wide margin for decades: an average 66% annual return in their Medallion Fund since 1988, according to Wikipedia. That is pretty much insane and seems like something only space aliens could do. If anyone is using quantum computers to help trade, it is probably them.
Medallion is also kept very small because I would guess the tricks that they use to keep those returns so high (maybe even some sort of topological quantum field theory!) don't scale well to larger investments.
Not sure if I would call ~$10 billion very small, and making $6-7 billion per year is a good chunk of money. But, yeah, I'm sure that would get bigger if they thought they could keep their returns high.
That's because it's not 66% but instead $6B. If they had twice as much capital it would be 33%, and if they had ten times as much capital it would be 6.6%. I'm simplifying to show the main point.
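In other words, treating the dollar profit as roughly fixed by capacity, the percentage return is just that profit divided by whatever capital sits under it (figures illustrative):

```python
pnl = 6.6e9   # roughly fixed dollar profit at capacity (illustrative figure)
returns = {capital: pnl / capital for capital in (1e10, 2e10, 1e11)}
# {1e10: 0.66, 2e10: 0.33, 1e11: 0.066}: same dollars, shrinking percentage
```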
That's actually been surprising to me, but it makes sense: "the entire history of the stock market" is not that big in terms of data, and the dynamics change frequently enough that things that happened in 2000 are almost useless from a machine learning standpoint. My models perform best when trained on a much more selective subset (I believe because the reasons for trades, and the market participants, keep changing, so the dynamics shift enough that old rules no longer apply).
Can you distinguish noise from non-linear correlations? I feel like you can never prove the absence of non-linear correlation, only show its presence once it's been modelled/detected.
> Even all the ticks on all the exchanges fit in a few GB of binary per day.
This isn’t true once you include equity options, which have several orders of magnitude more data.
However your general point still stands, because most options trading strategies don’t need such extreme granularity of data. Much of it can be ignored, or close to it.
You're right overall that most of these places are a lot less flashy on the inside (have worked at a couple). Linear models everywhere as you said
However I think you're underestimating RenTech here. They're genuinely just in another league compared to what most people consider the "elite" quant shops. You're not getting in unless you're an actually impressive academic with a track-record, so I wouldn't be surprised if they're trying some weird stuff that other shops can't even understand (although I would imagine in small size vs. more vanilla stuff).
IMO places like JS/CitSec do a really good job of bombarding campuses to inculcate this idea they're the absolute apex of mathematical wizardry, but the places that are really doing some dark magic shit aren't trying to get undergrads to apply to them. Ofc maybe I'm falling for the same kind of propaganda for RenTech
>However I think you're underestimating RenTech here. They're genuinely just in another league compared to what most people consider the "elite" quant shops.
People said the same thing about LTCM. They had multiple Nobel Prize winners on staff, literally the people who wrote the Economics and Finance textbooks.
Then a decade later, history repeated with all of the prop trading outfits doing securitization.
It's always the same ingredients: 1) a theoretically sound strategy for taking advantage of some arbitrage opportunity 2) the assumption that positions can actually be liquidated on demand at the prevailing price and 3) enormous amounts of leverage. Then something happens that wasn't accounted for (e.g., sovereign default, counterparty default, etc.) and suddenly the strategy is no longer sound ("in a crisis all correlations go to one"), the liquidity assumption is no longer true ("where are all the buyers?"), and the leverage puts you out of business ("the market can stay irrational longer than you can stay solvent"). People never learn.
RenTech was founded years before LTCM and is still going strong. It's not history repeating; they've outlasted most trading desks. They're basically a business providing a service to other market participants and executing it incredibly well, which is quite unlike the arbitrage LTCM was involved in.
That was a scam. Are you accusing RenTech of being dangerously overextended like LTCM, or a scam like Madoff? Considering the Medallion fund takes no outsider capital, that would be quite a strange way to run a Ponzi scheme.
All of this wizardry operates under the premise that given just the right meta-meta-meta-formulae some tiny edge over randomness can be squeezed from the mountain of historical data by sheer willpower and brute force computation. It cannot. The sooner people accept this as an iron rule, the sooner they'll stop falling for scams that promise to foretell the future.
But it obviously can. The returns of Renaissance’s Medallion fund cannot be explained by randomness. If every company in America was a hedge fund since 1776, the returns of Medallion would not arise by chance. They clearly have an edge (and don’t even accept outsider money, so no need to falsify).
I think you're definitely idolizing them a bit too much. Maybe a bit ahead of the curve, but not doing anything super fancy (although I guess those two statements are somewhat contradictory).
Just from snippets I've heard and read, I think they were one of the earlier firms to move into HFT (although maybe not the super-fast, infrastructure-heavy type of today). In some interview Simons said they realized returns became more predictable the shorter the time frame they looked at, and they pushed that to the extreme. There is also reason to suspect they adopted some NLP-style techniques early on, since Mercer was involved in that area and they initially hired a bulk of their team from IBM's NLP research group. Also, they did not dodge 2008 completely: in some interview they said they lost close to, or more than, half their portfolio value in the crash, but because they didn't have stupid leverage or outside investors, and because they trusted their models, they didn't sell and held on. So maybe just slightly better execution, but mostly the same.
Anyways, I was reading about these guys back in '09, when quant trading wasn't so blown up. Now every kid who's decent at math seems to want to be the next Renaissance, which just makes me feel like the best years for that are over.
I’m not idolising them, I’m just framing their returns in the correct perspective. The probability of any of the top firms existing by chance is astronomically small, that’s all I’m saying. Same is true of BlueCrest etc.
As far as I know the NLP stuff is more to do with similar techniques being applied to market data, rather than actual speech recognition or whatever. Hidden Markov models and the like.
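For a flavor of what "hidden Markov models applied to market data" can mean, here is a minimal two-regime filtering sketch. All parameters are made up for illustration; this is the generic forward recursion, not anything attributed to RenTech:

```python
import math

# Made-up parameters: a sticky "calm" regime and a "volatile" regime.
trans = [[0.95, 0.05],
         [0.10, 0.90]]          # transition probabilities between regimes
means = [0.0005, -0.0010]       # per-period mean return in each regime
stds  = [0.005, 0.020]          # per-period volatility in each regime

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def filter_probs(returns):
    """P(regime | returns so far) via the normalized forward recursion."""
    alpha = [0.5, 0.5]          # uninformative prior over regimes
    history = []
    for r in returns:
        # Predict the regime, then update on the observed return.
        pred = [sum(alpha[i] * trans[i][j] for i in range(2)) for j in range(2)]
        upd = [pred[j] * gauss_pdf(r, means[j], stds[j]) for j in range(2)]
        z = sum(upd)
        alpha = [u / z for u in upd]
        history.append(alpha[:])
    return history
```

Feeding it a run of small returns drives the filtered probability toward the calm regime; a run of large moves drives it toward the volatile one.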
Medallion distributes their earnings and stays a fixed size rather than compounding, so it's a category error to compare their returns to most hedge funds. (At 66% return for 30 years, they'd own everything in the world otherwise). They're more like an internal prop-trading firm, which makes their returns good but not insane.
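The parenthetical is easy to check: compounding at 66% for 30 years multiplies capital by roughly four million, which is why the fund must distribute rather than compound.

```python
capital = 1.0
for _ in range(30):
    capital *= 1.66   # reinvesting the full 66% return each year
# capital is now about 4e6: a $10B fund compounding this way would
# quickly exceed the world's total wealth
```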
I’m well aware it’s not compounded, it’s still a worthwhile comparison. We’re comparing ability to capture alpha, they’re clearly among the best in the world at that. 66% is insane for 30 years as a prop desk even with a fixed capacity, what makes you say it’s just good? Who is doing better?
They are well known for insider trading and other market abuses. Of course they'd rather everyone believed it was all the PhDs they've hired, but it's just a smokescreen.
That’s neither insider trading nor market abuse. They did something in a grey area regarding taxes, then decided to pay the bill rather than fighting in court to determine if it was or wasn’t legal.
They are the best at extracting information from publicly available data at fast speed, no more than that.
If you hire very intelligent people to do just that, you are doing it right. But the info is out there for everybody to see. They just arrive earlier than others. How? That is the secret.
Seconded. I think people generally give way too little weight to actually doing things well. There can be some clever mathematical ideas in there here and there, but in the end it has to come together into a running organization that makes few mistakes and can run consistently (while also observing a lot of regulations).
For me the "spooky magic" is more the ability to pull this off in size.
Edit: I do have friends on the buy side that use pretty fancy models, but those places are not in the HFT/market making game. But using those models does not take away from needing to be able to translate the edge consistently and efficiently.
No. Most "advanced" ML methods are extremely difficult to get to work in finance. You have a limited dataset with a low signal-to-noise ratio that is highly non-normal and heteroscedastic. Crucially, you can't easily make more data (and no, synthesizing new price data is not an option in most cases). This makes the upfront costs very high, and you then have to prove that you can beat the performance of simple models, and not just by a little. A complicated ML model that offers a 10% better risk/reward ratio (e.g. 2.2 Sharpe instead of 2) is a complete failure, because you traded a simple model that is easy to reason about intuitively for a total black box that is very difficult to understand.
Sure, you've got the quintessential marketing induced ML overlay that many firms do, but in all cases I've seen so far it's completely defanged and really there only so that it can serve as a marketing move.
Really depends on the time horizon you're talking about. They actually work pretty well for HFT: there's plenty of data around, and most of the information is just in market data - no nasty low-frequency stuff to deal with like news, earnings, alternative data, insider trading, butterflies flapping wings in China, etc. But the problem is that by the time your GPU spits out a datapoint, somebody else can go in and trade a few thousand times. State of the art on the most heavily contested exchanges is that your FPGA (or even ASIC), with a fiber connected directly to the exchange, needs to start sending ethernet/IP headers even before it has made up its mind what to send in the payload.
At lower frequencies when the data gets thin and noise/overfitting is a major problem, yeah it makes sense to use simpler models. Bias/variance tradeoff in action.
Fair points. I talked with a couple of HFTs and the general view was that their asymptotic backtests look promising on GPUs but that for most of the liquid markets the latency is way too high - basically confirming what you write.
On niche markets with low liquidity, one doesn't have such tight latency envelopes but those markets also offer more opportunities in general so again there's no real justification to use fancy ML models or GPUs in general.
GPUs are too slow for HFT trading, yes. Deep neural networks in general do work in this domain, people do use them, but the stuff that you can profitably deploy into production is not going to be your vanilla garden variety neural networks, there's a lot of extra engineering required to make it fast enough.
Why not both? Linear models for absolute lowest tick-to-trade latency doesn't preclude you from using fancier stuff at earlier modelling steps. Final linear models you ship to fpga can be mere distillations/triggers
I mean, every model's output is in the eye of the beholder (or modeler). Take any unpredictable wave-like chart, ask someone which direction it's going next, and guarantee you'll throw $100M behind their choice. Hey! You just successfully collapsed a wave! It's funny how, after all these millennia, most people still don't understand that someone claiming to divine the future from bloody egg yolks or whatever is just manipulating the future by getting people to believe in their magical divination.
Having worked in the industry for almost 25 years (in options trading), I must comment that the general quality of the article, and the author’s apparent level of understanding of the industry and options are highly questionable.
Do you have a link to any article related to your industry that you found insightful and erudite but still accessible to outsiders of finance? Not complete novices, but maybe an intermediate-level familiarity to the industry.
The linked pdf contains notes explaining HFT circa ~2011. The About section explains it's sourced from the author's blog, and was also used as lecture notes for an undergrad course. I've not read the whole thing, but at a minimum, the brainteasers contain some good nerd-sniping content.
For those wanting more recent additions, the author seems to have more updated thoughts here[1], as linked on the LinkedIn profile to which their former blog redirects.
I'm not looking for anything in particular. I'm at a stage where I know a little about a lot and I don't have the expertise to know what I should dive deeper on. The first thing that comes to mind for you would be great.
I think that, given the article was a five-minute read intended to introduce the financially literate reader to the new concept of quantum finance, the level of detail was quite reasonable.
To add on to the comment of /u/massinstall outlining the objectively incorrect info: finance has already been investing in quantum computing (from which quantum walks originate and where they are useful). JP Morgan has had a bunch of people who've put out quality work on quantum computing for options pricing. Goldman probably took heed, since Nikitas Stamatopoulos is working there now? Also rumor-ish, but there are people at RenTech who did work in quantum information science. As to whether they use it? ¯\_(ツ)_/¯
This interference creates a very different probability distribution for the asset’s final price to that generated by the classical model. The bell curve is replaced by a series of peaks and troughs.
-- No, it's not replaced by "a series of peaks and troughs". This is nonsense. It sounds flashy, as it is reminiscent of the peaks and troughs seen in the double-slit experiment, but it does not accurately describe what could be done to improve modeling with probability distributions. Looking at it from an information-theoretic point of view, peaks and troughs in a probability density function would just mean lower entropy, i.e. it would implicitly be assumed to contain (quite a lot) more specific information than another, smoother PDF. So where does this information suddenly come from? If this is not what the author meant, then it is at least an unfavorable choice of wording to write that "the bell curve is replaced by a series of peaks and troughs". To have mercy on the author, one could maybe assume they meant to speak about a characteristic function (https://en.wikipedia.org/wiki/Characteristic_function_(proba...) but that does not seem to be the case.
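The entropy point can be checked numerically: for a fixed variance, the Gaussian maximizes differential entropy, so any same-variance density with peaks and troughs necessarily carries more specific information. A sketch with an arbitrary bimodal mixture of variance 1:

```python
import math

def entropy(pdf, lo, hi, n=200000):
    # Differential entropy, -integral of p*ln(p), by the midpoint rule.
    h, s = (hi - lo) / n, 0.0
    for i in range(n):
        p = pdf(lo + (i + 0.5) * h)
        if p > 0:
            s -= p * math.log(p) * h
    return s

def normal_pdf(x, mu=0.0, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# A "peaks and troughs" density with the same variance as the standard normal:
# an equal mixture of N(+0.8, 0.6^2) and N(-0.8, 0.6^2), variance 0.36 + 0.64 = 1.
def bimodal_pdf(x):
    return 0.5 * normal_pdf(x, 0.8, 0.6) + 0.5 * normal_pdf(x, -0.8, 0.6)

h_gauss = entropy(normal_pdf, -10.0, 10.0)    # 0.5 * ln(2*pi*e), about 1.4189
h_bimodal = entropy(bimodal_pdf, -10.0, 10.0)
# h_bimodal < h_gauss: the multimodal density is strictly more "informative"
```

So replacing the bell curve with a multimodal density is not a free upgrade; the extra structure has to be justified by information from somewhere.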
Furthermore, any probability distribution may be used to model financial instruments, depending on how well it appears to be suited for the purpose of modeling reality. However, if the author already speaks about it so specifically, it is almost misleading not to mention that normal distributions (the bell curve) are in practice not used in the way described, at least not by people who know what they are doing. Consider why Nassim Nicholas Taleb (author of "Fooled by Randomness" and "Black Swan") said that no one in the industry uses Black-Scholes, or ever has. What he was referring to - correctly - is that (in the options space) nobody uses the normal distribution assumption to be correct for the modeling of asset prices per se. It is rather used as a stepping stone with some convenient mathematical properties to describe things analytically.
Broadly speaking, the classical random walk is a better description of how asset prices move. But the quantum walk better explains how investors think about their movements when buying call options [...]
-- Nonsense. There is not even a hint of an explanation why either of the two would be so. It is merely an empty sentence that reads well. Quantum walk explains investor rationale and psychology? And only when buying call options?! This is quite funny actually.
A call option is generally much cheaper than its underlying asset, but gives a big pay-off if the asset’s price jumps.
-- Not always so. It depends on many things. Calls actually consistently disappoint some buyers by moving much less on the way up than what they expected / had hoped for. Having looked at a call's delta as per Black-Scholes, they end up wondering why the call did not move as much as the delta would have predicted. It has to do with spot-vol correlation (and other things), but I won't go down this rabbit hole now... (I would say you can PM me if you are truly interested and want to know more, but it does not seem to be possible on HN.)
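For reference, the textbook Black-Scholes call price and delta look like this; the follow-on numbers illustrate, with made-up inputs, how a simultaneous vol drop can eat part of the delta-predicted gain, which is the spot-vol effect alluded to above:

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(S, K, T, r, sigma):
    # Plain Black-Scholes call price and delta (illustrative only; real
    # desks layer vol surfaces and spot-vol dynamics on top of this).
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    price = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return price, norm_cdf(d1)

# Made-up inputs: at-the-money call, 6 months to expiry, 30% vol, zero rates.
p0, delta = bs_call(100, 100, 0.5, 0.0, 0.30)
p_naive = p0 + delta * 2.0                          # delta-predicted value after +$2
p_repriced, _ = bs_call(102, 100, 0.5, 0.0, 0.27)   # spot up, implied vol down
# p_repriced < p_naive: the vol drop eats part of the delta-predicted gain
```

This is one simplified mechanism behind calls "disappointing" buyers on the way up; the full story involves the whole vol surface.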
The scenarios foremost in the buyer’s mind are not a gentle drift in the price but a large move up (from which they want to benefit) or a big drop (to which they want to limit their exposure).
-- If you are a buyer of a call option you would certainly not hope for a big drop in the underlying (!), and neither would you limit your exposure to such an event by buying a call. That is, unless you hedged it either delta-flat or fully, which essentially transforms the call into a synthetic put. Nothing of that sort is mentioned here.
The prices of such options closely match those predicted by an algorithm based on the classical random walk (in part because that is the model most traders accept).
-- No, they do not match a price "predicted" by an algorithm (assuming the author is referring to market prices of the options here). It is the other way around: the assumptions ultimately used to make the algorithm fit the market are what is "predicted" by the market. The "algorithm" referred to here is likely the Black-Scholes formula, and it does not predict any market prices. It gives you an idea of where the expected value of the option would be if all of its unrealistic assumptions were true (which they aren't). So you have a function with many parameters, one of the most important in this context being implied future volatility (average future variance, to be super-correct). But you still have to choose what inputs to use for them (the formula will spit out almost anything for the right choice of inputs). In practice, a subset of these parameters differs for each option from strike to strike, so there is no "close" match to market prices at all.
But a quantum walk, by assigning such options a higher value than the classical model, explains buyers’ preference for them.
-- No. Rubbish. How is this comparison even made? Assigning a higher value based on what benchmark or standard of comparison? The same input parameters to the pricing model? Hardly, as a quantum model would likely have quite different parameters than a "classical" one. This is just textual fluff.
Such ideas may still sound abstract. But they will soon be physically embodied on trading floors, whether the theory is adopted or not. Quantum computers, which replace the usual zeros and ones with superpositions of the two, are nearing commercial viability and promise faster calculations. Any bank wishing to retain its edge will need to embrace them. Their hardware, meanwhile, makes running quantum-walk models easier than classical ones. One way or another, finance will catch up.
I feel like people should separate RenTech and Medallion.
RenTech funds other than Medallion have experienced losses similar to any other fund's.
The idea is great: looking at past events to understand how a particular event, such as a sunny day in Manhattan or what the Yankees did the night before (and millions of other things), influences the price of a stock.
If you have enough data you can go back and test if a signal is really a signal or just random coincidence.
But somehow it's not working as well as it sounds on paper. I think one of the reasons might be that signals like that are very weak and fade over time, so you need leverage to make sure you cash in while the signal is there (because it's weak and subtle), and you also end up losing a bunch of money by chasing a signal that is on its way out.
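The "real signal or random coincidence" check mentioned above is often framed as a permutation test: shuffle the returns to destroy any genuine link, and see how often chance alone reproduces the observed relationship. A self-contained sketch on simulated data:

```python
import math
import random

random.seed(1)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def p_value(signal, returns, trials=2000):
    # How often does shuffling the returns (destroying any real link)
    # produce a correlation at least as strong as the one observed?
    observed = abs(corr(signal, returns))
    shuffled = list(returns)
    hits = 0
    for _ in range(trials):
        random.shuffle(shuffled)
        if abs(corr(signal, shuffled)) >= observed:
            hits += 1
    return hits / trials

# Simulated example: a genuine (but noisy) signal survives the shuffle test.
n = 200
signal = [random.gauss(0, 1) for _ in range(n)]
nextday = [0.5 * s + random.gauss(0, 1) for s in signal]
p = p_value(signal, nextday)
```

With real weak signals the catch is exactly the one described above: by the time a test like this confirms the signal on enough data, it may already be fading.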
The end of the article talks about valuing call options. I took a graduate course on Path Integrals for Derivatives Pricing in 2001 or 2002 so nothing new there. Since I didn't become a bank quant I never had any use for it. Linear models are much used on the buy side. Agree with the Nick Patterson quote below which says the same.
Hard to tell what the book is about. Sounds a bit like a Deepak Chopra take on economics. Throw a few mentions of "quantum" in there to sell books.
I like to think about the Natural numbers as a basis for money (accounting) giving way to the Integers as a basis for money (banking). Going along this trend, one gets to the Real numbers (finance? trading stock?) and so on until modernity, with the Complex numbers as the basis for money (global finance? trading options and other derivatives?)
My actual opinion on that is that you used to have to go to the square where people were exchanging. People farther away simply were not there and had to wait for the newspaper to know what happened with prices. I don't consider advances in communication to be controversial in that regard. I do think people should be aware of who sees their communications.
Not 'to trade' but to enter the small world of HFT. Many, many industries have capital barriers to entry, I don't really see the big deal. Just as I cannot start an HFT firm, I also cannot start a toy factory or restaurant without sufficient capital. If you actually look at how much money HFT firms earn relative to the finance sector as a whole, they're pretty small fish. They just pay a lot because they have relatively few staff.