"Computers and algorithms" is just a synonym for people; people implement ideas in code, design the algorithms, pick the weights for training machine learning, etc. No matter how many levels of abstraction you go through it's always people at the top. If anything happens in the society there's always a person who made the decision to make it happen somewhere. Deferring that responsibility to "computers did it" is dangerous bullshit.
The first person to walk what became the final path. Everyone who agreed with that person and walked the same path compounded his decision. Everyone who crossed the field should have thought about what they were doing at the time.
Far too many people, and especially developers, are way more focused on absolving themselves from responsibility for anything bad than they are willing to put the thought in at the beginning, and that's the root cause of a lot of problems.
But that person could not possibly have predicted the future actions of everyone else, and therefore it's a mockery of moral reasoning to hold them responsible.
> it's a mockery of moral reasoning to hold them responsible.
What are we holding them responsible for? Was something immoral? They clearly made the first decision, for better or worse. Now, if there was a sign that said "No walking on the grass", they clearly violated something.
You need to read it in the context of onion2k's original post and how onion2k's post relates to the article. There is a question of responsibility in that context, whether we like it or not.
Yes, this is a metaphor for people building algorithms (or other systems) they don't fully understand that could have poor consequences for others / society. At least, that's how I understood the relevance of the metaphor.
Criticizing "people building algorithms (or other systems) they don't fully understand that could have poor consequences for others / society" implies there is a conceivable alternative. That is it possible to build algorithms with a full understanding of all the negative consequences they could ever have for society. That seems obviously absurd to me, so the criticism is vacuous.
Er, no. The alternative is to not build those systems. What's clearly suggested here is not letting algorithms make all the decisions, but letting people actively manage those funds. That might be generally less efficient (in terms of volume of trades and profit), but the assumption is that a total disaster (e.g. a flash crash) wouldn't happen with "slow" humans in the loop.
How is a flash crash a "total disaster"? It seems to me that just describing it that way indicates your mind is addled by technology, since before modern times nobody would expect continuous pricing of everything every millisecond of every day or think that everything was worthless because it wasn't being quoted appropriately for an instant.
Someone has to take it upon themselves to put a sign there saying, "STOP! Habitat Restoration in Progress. Please choose another path."
And that's just it. Humanity has carved a path that says, "Make as much money as possible with the least amount of effort."
These machines have taken something we already know is bad, the 90-day quarterly earnings cycle that had already obliterated our long-term thinking, and compressed it down to nanoseconds, and we want it even shorter.
We have machines moving virtual money around corporations that use money to move real material around the earth -- and beyond.
That is the path we are on right now in this very moment.
Who is going to put up a sign saying, "STOP! Humanity Restoration in Progress. Please choose another path"?
That's a stretch. I know you're enamoured of your synopsis, but realistically emergent systems aren't designed or thought about. They might be tweaked.
Dirt paths are created by people making the same choice when there is not enough difference in grass levels for someone to notice. It’s only after a significant number of people make the same choice that feedback occurs.
Other systems may or may not be created in a similar fashion.
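To make that feedback loop concrete, here's a toy Python sketch of desire-path formation. Every number in it (the visibility threshold, the walker count) is invented for illustration, not measured from real foot traffic:

```python
import random

def simulate_walkers(n_walkers, visible_wear=50, seed=1):
    """Toy model of desire-path formation with positive feedback.

    Two equally good routes start out identical. Early walkers choose
    at random; once one route has noticeably more wear (visible_wear
    more passes than the other), later walkers follow the worn route.
    """
    rng = random.Random(seed)
    wear = [0, 0]
    for _ in range(n_walkers):
        if abs(wear[0] - wear[1]) < visible_wear:
            choice = rng.randrange(2)               # no visible path yet
        else:
            choice = 0 if wear[0] > wear[1] else 1  # follow the worn path
        wear[choice] += 1
    return wear

print(simulate_walkers(20_000))
```

Until the wear difference happens to drift past the visibility threshold, choices are effectively coin flips; only after that does feedback kick in and one path start to win, which matches the point that it takes a significant number of walkers before a path exists at all.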
> realistically emergent systems aren't designed or thought about
I'm aware of how they work. I'm arguing that someone who builds a system is responsible for it even if they don't know how it works. Ignorance, even in the face of a system so complex that no person could ever understand the underlying causes of what it does, is not an excuse. No one should be able to hide behind complexity.
Developers must either build in protections against their systems going wrong or they shouldn't deploy them.
I want to agree with your thesis, but it's impossible to foresee every possible outcome. Leaky abstractions aside, bugs aside, misaligned incentives (the new "I was just following orders", as someone here immortally put it) aside, it is impossible to imagine a priori all the ways a certain outcome that is desirable now will be undesirable in the future.
Every step of the development ladder is fraught with possibilities for error and catastrophe. To quote James Goldman:
"It is too simple. Life, if it's like anything at all, is like an avalanche. To blame the little ball of snow that starts it all, to say it is the cause, is just as true as it is meaningless."
It's useful to distinguish between responsibility and accountability here. The algorithms may be responsible for a particular outcome but the people who commissioned them should be accountable.
What about turnover? If the buck stops at the CEO, what happens when that person moves on? Is their successor responsible for everything that went on prior? I'm not saying either way, just asking how that should work.
> Is their successor responsible for everything that went on prior?
Yes.
That wouldn't even be a change to the current system. That's how it works now. If you take on the role of CEO and it turns out that years earlier the company did something terrible you will be expected to resign. It's one of the reasons they demand so much money.
Pretty much always if the incoming CEO can competitively negotiate and is a good hire.
Contrary to popular belief, the vast majority of career CEOs are good hardworking people. Like any high profile position, the outliers skew perception for everyone.
Let's take, for example, red light camera software that is trained to recognize plate numbers and issue traffic tickets. The decision to design and deploy the software was made by a human. The decision to send you a ticket was not. This has nothing to do with complexity or being wrong; there is simply no human anywhere who made that specific decision.
Right, so we just need to be smart enough (and have enough data) to centrally (and a priori) manage the collective and emergent action of millions of humans. Seems like a reasonable expectation.
I read in a self-help book that a doctor, as a child, used to like waking up really early on the first day it snowed and making a wild path in the snow, just for giggles.
Everybody else was just following his path through the snow.
Isn't that the equivalent of blaming the first single-celled organism that evolved a rudimentary flagellum for, as an example, our current climate crisis?
I was imagining desire paths in a park or something. It takes a lot of walks to wear down the grass, and probably hundreds before a path becomes visible.
Having walked in both long and short grass, I can tell you it largely depends on a number of factors, the largest seeming to be the amount of time that passes between each person.
The path is optimized for A) where people enter and B) where they are going. If there is no purpose, a path is simply a path, and ascribing meaning to it is fruitless.
- Whoever gave cachet to certain characteristics that are present at specific points of the grass
- Whoever made it desirable or necessary that passersby reach a certain location at the periphery of the field
- Whoever set up restrictions to entry at certain locations around the periphery of the field
- Whoever made any changes to the field (including non-human intervention)
More generally, culture at large and historical precedent.
—
Imagine a green field of grass.
There are minor differences across the field, but some of these differences can be noticed: a slightly darker patch of grass, a slightly yellowed patch of grass, some stones of varying sizes, a section that is slightly raised from the rest of the field.
Around the field, other things can catch the eye: there's a road to one side, some trees of varying sizes, one currently in bloom.
Over the seasons, the profile of this grass changes. At some point there was a park bench to one side of the field; later, it was moved to another side. There was once a storefront near to one side of the field.
Are you going to hold the ice cream man responsible when the path has a pothole in it? He'll claim a flow of people made it and it's not his responsibility. The fairy tale is useful because it focuses on one broad idea to the exclusion of the others.
Who is the man holding the smoking gun when algorithms collapse? The business owner is legally left with the responsibility, but finding out who programmed the algo to make the mistake is useful and vital.
Yes - agree. If you build a bridge (in many places) you have to sign off on the design - warranting that it is your professional opinion that it will do the job. To do this you normally need to be a chartered or certified engineer and you need professional insurance.
If the bridge collapses you get sued and your insurance pays out. You probably don't get more insurance.
This is not a perfect system, but we need several elements of it for algorithms. We need the ideas of tolerances, confidence intervals and analysis; certification of fitness for purpose must be limited and constrained to have value. We need professional certification that enables employers and the public to recognize experience and capability. People have confidence in pilots because of their uniforms, qualifications and the checks they undergo (training, retraining, blood testing, medicals) on a regular basis. As we have seen with the 737 MAX, when these things break down there's trouble.
Except paths are usually made because they're the shortest route. Taking the shortest path is not random; even if there is a curved paved path, people will cut across the grass to take the shortest route. So it's basically a pre-programmed algorithm in our brains saying we want to spend the least amount of time.
A decision to follow the herd and a decision to take the easiest road are still decisions.
You might say that a person made a decision to break down blades of grass to create a path and that others chose to take a path already created, but they are still making choices that they are responsible for.
Imagine a small box where every day at 6 am, an instruction on paper is written and the groundskeeper checks it.
Imagine a network of voters who vote on the instruction to be placed in the box. Their wide variety of votes will be automatically distilled into the winning instruction.
The groundskeeper dutifully clears a path, shouts at people to stay on the path, fills in errant paths.
Finally there is one path.
Who made the decision?
I only bring this up to suggest “emergent” decisions is a vapid framework to analyze this. Every phenomenon involving the aggregation of different agents is an emergent one, whether it’s an emergent system of politics leading to regulated actions or an emergent system of market participants leading to a price.
This doesn’t distinguish a situation of high regulation from a situation of low regulation.
Clearly the most absolute answer is to have every individual track and identify their own commits and changes to the computer system in an authoritative manner. Then you must balance each catastrophic failure against the reasonable expectation that the programmer could have predicted the outcome. Then you effectively strip the license to program (in that company/industry at least) from the offending 'rogue traders' when they try to run away with the billions of dollars built on exploiting said catastrophe and get caught.
Yes, but you could create the path just as you plant the grass and amuse yourself to no end at how sheep-like people really are.
Just because there's a path there, it doesn't mean it's optimal or good for the public at large.
> "Computers and algorithms" is just a synonym for people
I don't think we can say this anymore. I wouldn't say that the makers of AlphaGo were the decision makers for any of AlphaGo's moves. You could perhaps say the players of the games AlphaGo trained on were. When it comes to AlphaGo Zero you can't even say that. At most you could say that the people were policy makers or tweakers.
Yes, but AlphaGo can't be used as a metaphor for a lot of the algorithms/automated trading strategies which were implemented based on research done by people, and carried out by computers. AlphaGo only trained against itself.
I agree that it's always people at the top, but it doesn't follow that there's always one person who made the decision. Collective decisions exist too. And even looking at individual decisions, not all of them are made through careful, conscious means.
Plus, deciding to delegate decisions (to algorithms or anything else) is not the same as deciding through your very own direct perspective.
That was as true in the days when pit trading dominated as it is nowadays.
It's true that automated trading is fast, and that means that things can go off the rails fast. And, it's true, that has led to some spectacular news-making events. But, for that very reason and others, most trading firms keep a very close eye on what their auto-traders are doing. Closer than they ever kept on their pit traders.
I briefly worked at a firm that did both HFT and pit trading, supporting one of the teams in charge of monitoring both groups. And what I saw was that the company was generally much more in control of the algorithmic trading. It was more heavily instrumented than a human could ever be, with humans (who were themselves licensed traders) watching the computers very closely at all times, and many layers of automated fail-safes that would kick in if things went a little screwy.
By comparison, the pit trading was kind of a mess, and the team was frequently having to scramble to correct all sorts of human errors largely stemming from things like miscommunication and straight-up bad handwriting. Beyond that, I'm sure that, compared to the folks operating the auto-trading systems, the pit traders never had as firm a real-time grasp of the finer subtleties of their positions, simply because nobody's able to do math in their heads like that.
Which, I think, is why spectacular events in the markets stemming from speculative trading going off the rails aren't really any more common nowadays than they were in the days before high frequency trading. Yeah, computers created some new ways to screw things up, but they also mitigated some existing ones.
The net effect is not nearly as bad as journalists make it out to be. But, of course, playing to people's fear of the unknown has always been the more lucrative route for journalists to take.
That hints at something. Now that you summarize it, it is very similar to my experience in a completely different field: the higher the rate of automation, the higher the level of control. If things are done mostly manually, it tends to be an uncontrolled mess, with inefficient processes in place that create the illusion of control.
I think you're missing the point. Trading and Sales used to be people intensive, in today's parlance, 'high touch.' You had people talking deals on the phone, you had transactions sent to the exchange where a floor trader did the execution, you had a trader's assistant fill out the trade ticket and submit it manually to the mid-office, where you had a person who recorded the transaction then passed it to the back office, where yet more people processed and settled the trade.
All of those people are gone. No need, we have automated much of this deal cycle. Your best example is to take a trip to your local exchange floor, which used to be littered with runners, assistants, floor traders, etc. It's eerily quiet these days, as most trades are executed by computers.
Hmmm... I both agree and disagree. While humans program the general strategy, and even the bugs they didn't think they were adding, there is a substantial difference: computers can and will act in ways so completely stupid as to be obvious to even the most uninformed human.
What I fear the most is a silly bug causing a crash in a way no manually triggered series of trades would.
>What I fear the most is a silly bug causing a crash in a way no manually triggered series of trades would.
Banks are run by smart people, and they worry about the same things. There are dedicated systems checking to ensure algorithms aren't running away; all the big banks and brokers have been buying them for a decade or so. Nobody wants a crash. The algorithmic traders are mostly using a line of credit from the bank to do their trading, and the bank usually takes a commission on the profit, so the banks have a strong interest in seeing them make money rather than crash the market and disappear into bankruptcy proceedings. Of course, the government also swooped in and saved the day by mandating risk-checking systems a couple of years after everyone who cared was already using them (thereby solidifying the status quo, for better or worse).
Could you not say the same kind of thing about mortgage backed securities, Fannie Mae etc?
We've had flash crashes, which aren't in people's interests, and they still happened. I don't think it's reasonable to hand-wave it away because "smart people". Programmers are supposedly smart people, and I've yet to come across a perfect, bug-free program.
>Could you not say the same kind of thing about mortgage backed securities, Fannie Mae etc?
Human based checks are easier to subvert than automated ones.
>We've had flash crashes
Which is exactly the problem that most banks and markets try to prevent. Any system that uses what other people are doing as a strong enough input is going to be susceptible to this to some degree. If you try to buy too many shares of something, or at too disparate a price compared to the market, the bank will almost certainly kill your order. The exchange might kill it too. Depending on your contract with the bank and exchange, they may kick your session for the day if you do it too much (once may be too much). The limits most banks place on what you can do mean that runaway algorithms can really only be a slow tug in one direction or the other, which is no big deal.
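A minimal sketch of the kind of pre-trade risk check described above. The function name, the dict shape, and the limit values here are made up for illustration; real risk systems layer many more checks than this:

```python
def check_order(order, market_price, max_qty=10_000, max_deviation=0.05):
    """Reject orders that are too large or priced too far from the market.

    Hypothetical limits: at most 10,000 shares per order, and a limit
    price within 5% of the current market price.
    """
    if order["qty"] > max_qty:
        return "REJECT: order size exceeds limit"
    deviation = abs(order["price"] - market_price) / market_price
    if deviation > max_deviation:
        return "REJECT: price too far from market"
    return "ACCEPT"

print(check_order({"qty": 500, "price": 100.10}, market_price=100.00))  # ACCEPT
print(check_order({"qty": 500, "price": 120.00}, market_price=100.00))  # REJECT: price too far from market
```

The point is that a runaway algorithm hits these hard ceilings long before it can move the market much, which is why its worst case is the "slow tug" described above.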
"Human based checks are easier to subvert than automated ones."
I disagree. They're easily subverted, but in different ways.
"Which is exactly the problem that most banks and markets try to prevent"
But they haven't succeeded so far. Further, as the saying goes, history never repeats itself, but it rhymes. It's all well and good doing all these automated checks based on past issues, but that's not going to help with the next issue that isn't enough like the one before.
At the end of the day, both systems are subject to issues. Computers are far more disciplined than humans, but that comes with a tradeoff of lack of introspection.
Bee colony is just a synonym for bees; bees run the colony, bees build the hive, bees are the hive. No matter how many levels of abstraction you go through it's _always_ bees at the top. If anything happens in the hive, there's _always_ a bee that kickstarted it.
A collection of individuals and/or systems giving rise to a metasystem, a superorganism, does not and cannot exercise deliberate control over it: the metasystem lies on a higher level. Not to mention, as this thread in its totality surely proves, that individuals are in many cases completely unaware of the layer of control above them. What does an individual bee know about the colony?
What is "dangerous bullshit" is wishful thinking. Nation states have been with us for milennia. Corporations for centuries. Have we learned nothing that we still entertain notions of (human) global control?
That is exactly what happens. It's direct human nature to defer that responsibility to "computers did it", and to believe they will not in the future is naive.
The second part of that is that the human at the top can only make decisions based on what is visible to them. With fairly opaque ML systems, the only thing easily visible is the results; seeing how the system came to a result is much more challenging. So we will have humans at the top who can only make decisions about the results and not the means, which is the part many, including myself, find troubling.
True to a point. However, that person doesn't necessarily know why a computer made a decision. The high dimensionality of these questions and decisions can be handled by programming, but neither the question nor the solution is completely understood. Thus, we enter the age of AI.
That doesn't change the fact that they're responsible for the outcome.
A reasonable metaphor is someone firing a gun into the air: they can't claim not to be responsible for where the bullet lands despite having no real control over where it comes down. If you don't want that responsibility, don't pull the trigger.
Right, but that doesn't change the fact that the stock market has shifted from a model where people take careful aim before pulling the trigger to one where people mostly look where everybody else is firing, then fire in the same direction.
Yeah, a lot of people warn about the dangers of AI. And the danger is real: handing over decision-making we don't understand to computers comes with real risks.
Yes, but when computers do things that humans used to do, there is simply less human oversight. If a human is taking action to enforce a policy that will harm other people, there's a chance they'll resist it or raise awareness.
If an algorithm does it, there's no final line of defense. And software is easy to scapegoat because it's murky. You can claim "we didn't know the algorithm would do that," but it's harder to scapegoat your employees when there are emails on record showing you specifically told them to enact a social harm.
> If anything happens in the society there's always a person
Who decided to make George W Bush president? I highly doubt you can assign responsibility to either a single individual or even a fully enumerated, clear subgroup of individuals.
There's also a person who can and will yank the power cord out of the wall if it looks like the algo is going haywire and losing a ton of money, as happened during the quant quake.
For every automated trading system that is operating on major exchanges today, there are people responsible for its actions. If a system disrupts a market significantly, regulatory agencies work to be sure the responsible people are fined, banned from trading, or even imprisoned. There is no deferment of responsibility to "computers did it".
> If anything happens in the society there's always a person who made the decision
This is not true in the case of machine learning. Yes, there was a person or people who designed the model, gathered the training data, trained the model and deployed the result. But the decisions the model makes as part of its function are not made by those people.
Most of the time, the people who trained the model cannot even tell you why it is making the decisions it makes. Interpretability of ML models is currently a very hot topic in the field and progress is being made, but a lot of currently deployed models are inherently opaque.
Set your spouse’s alarm clock for 4 am. When it goes off, calm them down by pointing out that while you set it and you turned the volume up to 11, it wasn’t your decision to disrupt their sleep; they should blame the timer inside instead. Let me know how the couch feels afterward.
Seriously, the responsibility lies with whoever made, or okayed, the system. One-off things, especially if they are hard to foresee, might get a partial pass. Otherwise you ought to be carefully testing and evaluating your own system instead of blaming gradient descent.
> the responsibility lies with whoever made, or okayed, the system
And generally speaking these are very different people, but that is beside the point. I am still unsure why we are talking about testing and evaluating, as if we are only talking about failure cases.
If I set the alarm clock for 7AM and it starts ringing at 4AM, the fault is with the clock designer. But if I set it for 4AM and it starts ringing at 4AM, it is somewhat silly of me to start blaming the clock designer for my disrupted sleep, isn't it?
And we are not talking alarm clocks, we are talking about systems that make complex decisions based on hugely multidimensional data. Decisions that we provably cannot explain. There is no amount of testing and evaluation that can ensure that these systems are error free. But when they work, they do work well and provide a competitive edge over human analysts.
> if everyone hates the current system, who perpetuates it?
And Ginsberg answers: “Moloch”. It’s powerful not because it’s correct – nobody literally thinks an ancient Carthaginian demon causes everything – but because thinking of the system as an agent throws into relief the degree to which the system isn’t an agent.
It's automation taking away the lower level trader and analyst positions. These were jobs that were very lucrative. This creates an even more rarefied, winner-take-all atmosphere, and makes an industry that was already considered amoral downright sociopathic.
What is missing here is the Big Picture: trading activity has totally decoupled from value creation, and technology has sped up the process exponentially. These are not investors, they are speculators, and it is creating a form of technical debt that has the inherent capability to take down the productive parts of the system.
Trading activity was never about value creation to begin with.
Non-finance people seem to think the stock market only exists for investing, which isn't true at all. In fact, outside of an IPO you are not investing in a company, but just trading a piece of a pie that already existed. The stock market has always been primarily about trading, and informational arbitrage is simply more efficient when you have computers calculating it.
> Non-finance people seem to think the stock market only exists for investing, which isn't true at all. In fact, outside of an IPO you are not investing in a company, but just trading a piece of a pie that already existed. The stock market has always been primarily about trading, and informational arbitrage is simply more efficient when you have computers calculating it.
Baloney. Corporations compensate employees, management and executives in stock. Furthermore, they may use their stock as debt collateral, or flat-out purchase investments [e.g. startups or other companies] completely or partially with stock. For example, Facebook purchased WhatsApp with $12 billion of Facebook shares, $3 billion in RSUs, and $4 billion in cash [1].
Moral of the story: The higher your stock price, the lower your cost of capital.
What exactly are you arguing? That stock based compensation or usage in an acquisition means the stock market isn’t about trading?
Sure, the more valuable your company is (let’s not say stock price because really it’s your market cap and some other factors) the more it can leverage that to make deals. Not sure how that’s related to the function of the stock market itself
Companies save cash by paying employees in stock instead of cash (and saving cash is just an indirect way of raising cash; a penny saved is a penny earned, etc.). Companies cannot pay employees in stock unless the employees have a way of eventually selling that stock. Employees can only sell stock because a robust secondary market exists for buying and selling it. Therefore, the continuous trading of stock helps companies raise cash and "create value" even after the IPO.
When a start-up is privately held, they might raise a round of funding to grow their team. Investors get a piece of the company, the employees get paid, and the company gains an employee at the cost of diluting their equity.
When a publicly traded company wants to grow, they can give the employee stock. The employee can turn around and sell the stocks on the market. As above, investors get a piece of the company, the employee gets paid, and there are more outstanding shares.
I'm not OP but I think their point was that the two cases are not so different. The line between what is "trading" and what is "investing" is not so easy to draw.
> Trading activity was never about value creation to begin with.
If that's the case, why should it exist at all? Why don't we just outlaw it and send all the smart people working on it to go write software or cure cancer?
Allocation:
(1) stock market rebalances share prices somewhat well, which means that
(2) companies who produce more growth are valued more highly, which means that
(3) those companies, which frequently hold some sellable stock, can raise more money by selling that stock, which means that
(4) the economy grows more.
So the stock market allocates resources reasonably well (in comparison to old people making deals on golf courses). That said, I'm not too convinced by step 3: companies focus too much on making shareholders happy, who often have their own short-term deadlines.
Convergence:
Given how quickly stock transactions are made nowadays, we can assume that prices converge pretty quickly. As an extreme example, suppose a war started a day ago. In the past, people could hide this fact from you when you made an investment if you hadn't read the newspaper that morning. With the stock market, you can assume that the price of a stock reflects nearly all publicly available info. You don't need to "do your own research".
Investment Fairness/Efficiency (not sure which word best describes this):
Investing in companies directly is more "efficient" than investing in a bank or bonds. You don't need to give money to some large investment firm, which takes their cut before directly investing in large companies. The "shortest path" (path with the least middlemen/overhead) to investing well often involves the stock market. You don't need to wait for the government to slowly do something with your money. You get to take advantage of all the research that people have done to make the stock market prices reasonable.
I've only taken a college class on finance, so this is a question I also want answered. How does the stock market (or the secondary market in general) truly benefit the economy, and what are its limitations?
A dividend just reduces the value of the company by x cents per share and pays you x cents per share in cash. It's not "free income" as most people seem to think. It makes more sense to think of it like a buyback.
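A worked example with made-up numbers shows why: a dividend moves value from the share price into your cash account, leaving your total unchanged.

```python
# Illustrative numbers only: 100 shares at $50, paying a $0.50 dividend.
shares = 100
price_before = 50.00
dividend_per_share = 0.50

# On the ex-dividend date the share price drops by the dividend amount.
price_after = price_before - dividend_per_share
cash_received = shares * dividend_per_share

total_before = shares * price_before
total_after = shares * price_after + cash_received
print(total_before, total_after)  # both 5000.0: no free income
```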
You are speculating in a company even if it pays dividends. There is no guarantee dividends will continue. Are you trying to say there is a difference between a speculator and an investor? If so, please explain.
If the company pays dividends, you may purchase a stock with zero expectation that the price of that stock will rise. There may be risk, but some of that risk is that your asset will stop delivering value to you, not solely that other speculators will not be willing to buy that asset from you at a higher price.
Speculators have shorter windows than retail investors looking to store value for long-term gains. Owning a non-dividend stock in my retirement fund is not equivalent to speculation.
Because what the market will do between now and next week is pure speculation. Over the next 30 years you can expect value to grow about 7% per year on average.
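For scale, here is what that compounding claim implies, taking the 7% figure at face value (it's a historical average, not a guarantee, as replies below point out):

```python
# Compound growth at an assumed 7% average annual return.
principal = 10_000
rate = 0.07
years = 30

value = principal * (1 + rate) ** years
print(round(value, 2))  # about 7.6x the starting amount
```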
I'm not sure that continuous growth can be reliably predicted over a 30 year span. There is reason to believe that we may need to move away from economic systems dependent on continuous growth. (edit: though I highly suspect this is more than 30 years away.)
Buying a range of stocks and holding them for the long term may reduce your risks, but that doesn't mean it isn't speculation.
Speculation just means that you are buying something with the expectation that you will sell it for a higher value. When your assets don't produce direct value (such as dividends, rental income, functional utility, etc) then investment in those assets is speculation, regardless of the risk level or time frame.
Just because the US stock market has worked out in the past century doesn't mean it will continue to reap those types of gains. If you had invested in the Nikkei 30 years ago, you would still be waiting.
The very act of investing is the act of taking on risk. It's not free money, you are being rewarded for taking on the risk.
Every disclaimer you will see in the trading world includes the line "Past performance is not an indication of future performance".
Everything in trading is probabilities, which is a way of thinking humans mightily struggle with.
A good day to day example is when a weather forecast calls for an 80% chance for rain on the weekend, so you cancel your camping trip. It ends up not raining, and you curse the meteorologist for being wrong.
They were not wrong - yet most people say they were, showing they are unable to think in a probabilistic way.
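A quick simulation makes the point (the 80% figure is just the forecast probability from the example):

```python
import random

# If an 80% rain forecast is perfectly calibrated, about one weekend
# in five will still turn out dry -- and the forecaster was still right.
random.seed(42)
trials = 100_000
dry = sum(1 for _ in range(trials) if random.random() >= 0.8)

print(dry / trials)  # close to 0.2
```

Judging the forecaster on a single weekend is like judging a poker player on a single hand; calibration only shows up over many forecasts.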
> FIFTY YEARS ago investing was a distinctly human affair. “People would have to take each other out, and dealers would entertain fund managers, and no one would know what the prices were,”
Was this, on the other hand, good?
If we take the doctrine that a market is a system for transforming information into decisions through the medium of prices (see e.g. Red Plenty), then using an increasingly sophisticated information system to do it seems only right and natural? And that it should reduce overheads of fund management?
(As an aside, that initial-cap "F" looks really nice and is a neat bit of CSS)
I agree - it is far better to have computers do it than run the market at "the clock speed of a babushka at a market stall" as Francis Spufford put it.
Although in this case it'd be the clock speed of a fund manager on a golf course.
The only decisions being made at the nanosecond level are whether to quote at 148.23 or 148.24. So yes, that’s enough time for trivial decisions like those to be made.
That comment was obviously a metaphor choosing "nano" as it's the coolest prefix, but since this is Hacker News I suppose now we have to fact-check a metaphor.
A Geforce 1080, at the high end of "normal" computing claims about 350 gigaflops. Divide through and we get 350 floating point calculations per nanosecond as an absolute maximum. Depending on what the numbers actually are and what method we make the human use (expert abacus users are surprisingly fast) that looks more like an hour's work for the human.
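Checking that arithmetic quickly (the 350 GFLOPS figure and the abacus speed are the commenter's assumptions, not spec-sheet values):

```python
# The commenter's own figures: 350 GFLOPS, and (say) one abacus
# calculation every 10 seconds for the human.
gflops = 350
flops_per_ns = gflops * 1e9 / 1e9   # 350e9 ops/sec -> ops per nanosecond
print(flops_per_ns)                  # 350.0

human_seconds = 350 * 10             # 350 calculations at 10 s each
print(human_seconds / 3600)          # just under an hour
```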
This just highlights how absolutely tiny a nanosecond is, though. Those speeds are only valid for numbers already within the processor; fetching from DRAM could be hundreds of nanoseconds. Also recall Grace Hopper's excellent video with a nanosecond of cable length, which is about a foot long.
> This just highlights how absolutely tiny a nanosecond is, though. Those speeds are only valid for numbers already within the processor; fetching from DRAM could be hundreds of nanoseconds. Also recall Grace Hopper's excellent video with a nanosecond of cable length, which is about a foot long.
This is exactly what I was intending to highlight. The speeds at which High-Frequency Trading operates are so ridiculously fast that even a computer can't put much "thought" (let alone more considered "wisdom") into a problem.
It sounds like you’d be surprised about how many industry leaders come from STEM backgrounds. If you take a headcount of the partners of all the quant firms that have been profitable over the last ten years, I’m not sure if STEM nerds would be the majority but they’d certainly make up a significant portion.
I’m pretty sure the Rentech and DE Shaw founders were mathematicians. I’ve heard Simons in particular is stupid good at math, and there are apocryphal stories to that effect. Wouldn’t surprise me if other quant funds were also run by the math-heads themselves.
The problem is the not-so-rich don't want to play or can't afford to play. I bet I do better than most traders out there and have been calling the market correctly for a significant period of time, but I can't get enough money to move into the billion-dollar range. Others who have the money aren't willing to risk it because they made it using other methods and are more likely to continue reinvesting in those, i.e. real estate.
Overall, it doesn't sound too bad. What caught my attention was this line:
> Another gripe is that traditional asset managers can no longer compete. “Public markets are becoming winner-takes-all,” complains one of the world’s largest asset managers. “I don’t think we can even come close to competing in this game,” he says.
Individually, this doesn't really say much. But various sectors becoming "winner-takes-all" seems to be a trend right now that concerns me. Some say it is just a trend, a new age of conglomerates, and it won't last. I hope they are right.
Historically, this is not the first time we have seen that trend. I have a theory that some technologies inherently lead to larger enterprises, while others lead to small ones. Think about steam vs. electrical motors for a quick image.
The internet seems to be of the first kind, but since it has a large social component, I wouldn't make any large bet on this property holding forever.
There is also an unrelated effect that the same technology tends to enable larger and larger enterprises when it matures. But this one seems to be much smaller, and not really relevant right now.
If the stock price is contrived, it's contrived, and the only difference between humans arriving at a price slowly and computers arriving at it quickly is how much time and money gets consumed in transaction costs.
If a company wants their stock to have a more stable price, they need to make the price less contrived, for instance by paying dividends that anchor the price to something tangible.
Well, is this point of raising awareness of this sort of thing just to say "these things are bad"?
Often it's helpful just to say "Let's discuss the potential downsides so we can get mechanisms in place to protect society before they get cemented into place."
Most of the comments strike me as negative, like the article's discussing a bad thing. This is weird because the article itself is very positive.
From the conclusion:
> It is natural to be fearful of the consequences, for it is a leap into the unknown. But the more accurate and efficient markets are, the better both investors and companies are served.
Sure this is bad for people who wanted to do the jobs getting automated, much as self-driving trucks are bad for people who wanted a career in driving trucks. But if we automate away all of the jobs with machines that do them better than humans ever could, is that really a bad thing?
I mean, sure, we'll need to rework our economic systems to adapt to a world in which the economy neither requires nor desires human laborers -- and, yeah, that's gonna require quite the overhaul -- but is a world in which people don't need jobs to survive really so scary?
I think part of the skepticism isn't coming from automation itself, but from the fact that automation has caused essential yet extremely complex systems to become tightly coupled. A classic example of how this can be problematic is the 2010 flash crash. In these kinds of systems it can be very difficult to know that the non-equilibrium behavior of the system is sane. When people are an active part of the decision-making process they can put the brakes on clearly nonsensical behavior, but with increasing automation a crisis could be created out of random fluctuations and become a major problem before anyone realizes what is happening.
Why such an optimistic assumption that the extreme rent seeking of today won't just balloon to unknown heights in the wake of a seismic shift between labor and capital, creating a permanent class of haves and have nots? Especially once personal security, law enforcement, and military operations are automated...
There's certainly a Brave New World style dystopia available in your scenario where only a select few are free and even fewer are wealthy.
I'm not particularly worried about dystopian outcomes like that because I don't see how they'd actually happen.
I figure that there're basically two cases:
1. Automation kicks in slowly enough that the government isn't toppled, allowing people to simply vote on a more equitable way of doing things.
2. Automation kicks in quickly enough that powerful individuals can openly defy the government, acting above the law.
The first case doesn't seem problematic. I simply can't see a society with >99% unemployment and mass poverty, with booming wealth in the hands of <1%, not voting to distribute the wealth, so long as they have the ability.
The second case would then have sub-cases, depending on how the powerful-faction chooses to behave:
a. The powerful-faction chooses to comply with the government despite being powerful enough to topple it. Then stuff turns out like in Case (1) anyway.
b. The powerful-faction refuses to trade with everyone else. Then they'd have their own, exclusive economic system, but everyone else could still trade with each other. Then they might not be as wealthy as the powerful-faction, but they could still be much wealthier than people today.
c. If the powerful-faction chooses to kill-off or cage everyone else, that'd be bad.
So, sure, there's that hypothetical scenario in which people who rapidly rise to power would leverage that power to kill off everyone else. But, that just doesn't seem particularly plausible to me. And most of the other scenarios seem to have positive outcomes.
- The powerful faction cages everyone else, not necessarily in literal physical cages but in psychological ones. Provide barely enough for survival, and use psychological techniques to ingrain in everyone's mind that "this is fine".
I feel like this comment sounds rather edgy, but to me this is the most likely dystopian scenario if we're anyway talking about powerful-factions > government.
I find it easier to select what companies not to buy. It's hard to do it cost effectively and with low risk. Short selling or put options is not the answer.
Idea for a new fund directed at retail investors:
S&P 520 index fund. It's basically just a low-cost S&P 500 index fund, except that the index tracks the 520 largest listed companies and individual investors can remove any 20 companies from their index. If you don't know what to drop, the fund managers drop the 20 smallest and the index becomes the S&P 500.
The catch is how to implement this strategy with low cost. In practice the fund as a whole would take input from every fund owner at the time they do rebalancing. Then they would adjust the ownership of the fund based on how the investor performs relative to the whole fund in the next rebalancing. How to implement this may be tricky.
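A minimal sketch of what the per-investor exclusion step might look like, assuming cap weights are simply renormalized after the drops (tickers and weights here are invented for illustration):

```python
# Hypothetical helper: drop up to 20 names from a cap-weighted index
# and renormalize. Tickers and weights are invented for illustration.
def personal_index(weights, excluded, max_drop=20):
    if len(excluded) > max_drop:
        raise ValueError("can drop at most %d names" % max_drop)
    kept = {t: w for t, w in weights.items() if t not in excluded}
    total = sum(kept.values())
    return {t: w / total for t, w in kept.items()}  # weights sum to 1 again

universe = {"AAA": 0.4, "BBB": 0.3, "CCC": 0.2, "DDD": 0.1}
my_index = personal_index(universe, {"DDD"})
print(my_index)  # DDD removed, remaining weights renormalized to 1.0
```

The math is trivial; whether thousands of slightly different per-investor baskets could be executed cheaply at scale is exactly the open question.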
The name for the product category you're broadly describing is "smart beta." This exists in quantity. "SPY but without overvalued stocks in it", etc etc.
The reason the exact product you want doesn't exist is because it will have the same returns as SPY but cost 20X as much due to lower economies of scale, broadly because people don't want it.
> but cost 20X as much due to lower economies of scale, broadly because people don't want it.
For any existing fund manager who tracks an index (e.g. Vanguard), this shouldn't be too difficult or expensive. They're already buying/selling a TON of S&P 500 shares at the close of the trading day. Now they just need to subtract some of those buys/sells for the overvalued companies that people do not want. Am I missing something?
EDIT: Oh, I guess the (SEC?) would require a different fund management and prospectus for every index "smart beta" permutation, and therefore wouldn't be realistic.
Whenever I feel the need to use the adverb "just" when talking about something I think someone else should do, it usually means my understanding of the situation is the tip of an iceberg I can't fully see.
Just buy SPY with 95% of your money and then allocate 5% to put options on the losers. If they don't work out, it was just a hedge. If they did, you had your cake and ate it too.
In terms of making it a fund, I don't see how it can done effectively. But if you're an individual investor that knows what you're doing (ie have experience with options, etc), it's fairly straightforward to just allocate 5% of your portfolio to liquid long-dated put options.
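A back-of-envelope sketch of that 95/5 split (illustrative numbers only; this is just the allocation arithmetic, not an options-pricing model):

```python
# Illustrative numbers only: a $100k portfolio split 95/5.
portfolio = 100_000
index_allocation = 0.95 * portfolio   # broad index, e.g. SPY
hedge_budget = 0.05 * portfolio       # premium spent on long-dated puts

# Worst case for the hedge: every put expires worthless,
# so the known maximum drag on the portfolio is the premium paid.
worst_case_drag = hedge_budget / portfolio
print(index_allocation, hedge_budget, worst_case_drag)
```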
Is picking the rotten apples at the store just as hard as picking the very best?
If you made an index of penny stocks, do you think markets are efficient enough this would be basically as good as an S&P 500 fund? What if you made two indexes, one of penny stocks that pay dividends, and one of those that do not?
I did this experiment years ago with a stock market simulator using real market data, and it suggested that, no, there is not a price at which total shit is a good investment, or if there is, it's not the market price. Therefore it follows that excluding that shit from your index is potentially worthwhile.
Penny stocks are usually traded OTC and aren't in any index fund I've seen anyway.
Picking losers is just as hard as picking winners. If you think you know of a loser, just short it. You'll be hard-pressed though, because other people, likely with more info than you, have already done this, and the price has already come down.
Why do you think shorting a stock is (uniformly) just as easy as going long? I don't know where you people get the idea it's symmetric. And the difficulty of shorting a stock varies massively from one company to another, which is a not irrelevant aspect of it. You can't (legally) print unlimited shares yourself to sell short.
If you could make money shorting rotten apples at the store, visibly rotten apples would no longer exist and you would not be able to pick them out at the store.
The EMH applies to stocks, not apples. Your analogy is nonsensical.
Wait, I thought the efficient market position was that the "rotten" companies were all priced correctly. But you seem to be saying there aren't any.
Some recent research showed that the market doesn't even take into account the contents of SEC filings which seems like the minimum for a semblance of efficiency. I have to say, if you are going to take the position that there are no worthless public companies out there, you ought to take a sizeable sample of 10-Ks and just glance at the business description of each.
Also...I'm not an economist or expert, but what I think you are saying is that the EMH only applies even in principle to things you can sell short and that sounds wrong to me.
What does picking apples have to do with trading financial assets?
Analogies are useful to illustrate an idea, but they lack any value when establishing a chain of reasoning (i.e. providing proof that a statement is true).
Analogies always match up in a limited number of ways, so saying that the things in an analogy are different in some ways is not a specific criticism.
Apples are sold in markets. Most of us are not experts on apples any more than stocks. Yet we use what we do know to select from what is available, rather than having faith in market efficiency. Because there is information that the non-expert can easily perceive.
How can that be wrong with stocks? I'm getting the impression people are attributing the difference to the possibility of shorting stocks, but I don't think that is necessary in theory for efficient markets, and selling short is clearly not just as easy as going long, so the symmetry isn't there anyway.
It's easy to identify patterns that let you consistently (consistently enough to make more than you lose) pick companies that will win or lose a little. That's why that task is now automated. Picking the companies that will win and lose by orders of magnitude is hard to do consistently.
I think with a sufficient number of investors, this fund would correlate very strongly with the S&P 500 itself. So it should be relatively cheap for the issuer to simply buy the S&P 500 and enter into a swap agreement to cover any difference.
Wouldn't the effect of, say, 15 of your picks underperforming (good for you) be dwarfed by the other 480 performing the same? Couldn't you simulate this fund by buying an S&P index fund and buying puts on your 15 picks?
It's actually the other way around, at least historically speaking. Typically a very small number of names account for the lion's share of the index return.
The stock market is a gamification of financing. While financing was invented to direct money to the most useful projects, the stock market transformed this usefulness function into a score: the return on investment.
The problem with the algorithmification of finance is not that we don't know where it's going, but that it has lost its objective of optimizing for utility in favor of optimizing pure profit. This has been a problem since the "greed is good" mentality of the 80s. But greed is not good: maximizing profit does not maximize societal utility.
It surprises me that a multitude of separately developed systems acting in the same 'playground' doesn't give rise to more emergent behavior. I would expect accidental death spirals as systems unknowingly couple together and get caught in a loop of failing together, along with the polar opposite: astronomical growth spurred by nothing but the systems' bidding logic interlocking in unexpected ways.
Sure, there have been catastrophic failures like Knight Capital, but those, at least the ones I am aware of, all boil down to the system having flaws. I have yet to see a system which worked and followed the logic the designers intended, but which was accidentally (or purposely?) exploited by other automated systems doing the same. Shouldn't the complexity of these systems make such a thing almost inevitable and common?
They undoubtedly do to an extent, but the exchanges use price limits, trading halts, and mandatory nonstop human supervision to keep things under control and allow human operators time to kill errant systems that are losing money. Trading firms and clearing firms require additional safeguards to limit instability and losses. Also, if some group of trading systems does go wheels-off in a positive feedback loop, there's likely to be another, larger group of trading systems operating correctly and taking the other side to eventually stabilize prices.
The growth of "dark pools" for electronic trading off of the stock exchanges has also been an important development in the last couple of decades. https://en.wikipedia.org/wiki/Dark_pool
One possibly interesting consequence of this is that companies that want to optimize their stock value may now try and find what factors the financial algorithms are looking for and optimize their business based on that. Might be an opportunity for some creative upper management people?
They're still going to be factors that a human already thought was relevant to making investment decisions. Machine learning data pipelines don't write themselves.
In theory the small investor who has the freedom of not being obliged to track the index and is small enough not to be troubled by low liquidity could potentially exploit these advantages?
Some day we'll have central AI-planned economies, and we'll laugh at our "free markets" of olde. Computers will allocate capital far better than humans can.
How will an AI decide what the optimal allocation of food for the hungry is, how many computers everyone needs, and whether we should build basketball courts or if it's just a waste of time?
Those are all value judgements, and capital allocation is a value judgement too. I don't think an AI can optimally make decisions like that without the input of people. And an economy based on the input of people is still a free market. Perhaps with an AI in control and personal input we could make a much more democratic market though, which would be interesting.
You say that as if you think we will have a choice in the matter. A centrally planned economy is the kind of numerical optimization problem that computers are far better at solving than humans are. Eventually, some country will implement an AI-driven centrally planned economy that functions far more efficiently than our current market-driven model, at which point everyone else will either be out-competed by this new, more efficient system or adopt something similar.
> Eventually, some country will implement an AI-driven centrally planned economy that will function far more efficiently than our current market-driven model
Your core assumption is that it's possible for an AI-driven centrally planned economy to be "far more efficient" than a market economy. There are two problems here: getting your AI the information they need to make the right decisions, and the political risks of implementing a perfectly efficient centrally planned economy, which will by necessity cause all kinds of suffering and death as a result of its allocation decisions (which, by the way, will probably be guided by value judgments made by humans, but I digress). Both of these are huge challenges, and it's not clear to me that either of them will be easier to accomplish with a AI-driven planned economy vs. a pure market economy (which also has its problems, I know!). They're ultimately just two different kinds of algorithms, and it's unclear to me that the top-down centrally planned approach is necessarily better, even given general AI and tons of input information.
"We'll hand basic trading decisions to a microcomputer? Doubt it." - 1980s wall street.
Increasingly, the whole world is algorithmically analyzed and driven. Decisions to hire and fire people, or whether to invest, or what type of product to build. College students are following this trend by majoring in CS and data science, while liberal arts departments nationwide are downsizing. It's only a matter of time before these algos are unified and all human decisions take a backseat to the decisions of machines.
OK, time for a stupid question: if machines do 90% of the trading, who is on the other side of the buy? And if a machine (or algo, or whatever) sells to another machine, can't it be manipulated?
Flash Boys is written to entertain, though, not inform. It is littered with inaccuracies and dramatization to the point of lying.
Front-running, for the record, is illegal and nobody does it. Michael Lewis perverts the phrase to mean "using publicly available information and extremely expensive- though publicly available- radio technology to move stock information faster than competitors". Which you might still think is "unfair", though I would argue it's only unfair in the same sense that WalMart and Target make it hard for small shops to compete because they can't afford ultra-efficient trillion dollar supply chains. It's not illegal, it's objectively more efficient, and the only thing that prevents anyone from "just doing it too" is capital.
"Dark Pools" by Scott Patterson is a much more educated and in-the-know look at electronic trading.
>> Front-running, for the record, is illegal
I didn't get the sense that it is illegal from the book. Could you please elaborate on what you understand by front-running and why/how it is illegal?
>> Michael Lewis perverts the phrase to mean "using publicly available information
Per my understanding Michael Lewis is referring to the fact that HFT firms were able to race faster than the original trade executions and execute part of the trade, due to it being spread over multiple exchanges with different latencies.
While this may not be illegal, it surely sounds unethical.
> I didn't get the sense that it is illegal from the book. Could you please elaborate what you understand by front running and why/how is it illegal.
Front running involves placing a trade based on nonpublic information. It's a type of insider trading that has been going on for at least hundreds of years and it has nothing to do with HFT. The classic example is a broker placing orders ahead of their own clients in order to profit off the market's reaction to the client orders.
An HFT system placing orders based on public data feeds is not front running.
I'm not sure Michael Lewis said it was so much "unfair" as it was "nonproductive". The HFT firms weren't creating any value for the markets. They weren't making markets. They weren't helping liquidity except on paper by making every trade show up as multiple trades. At the end of the day they held zero position.
The problem is that they were basically taxing everybody who couldn't build quite as close to the physical location of the servers as they could. They were just parasites.
Every non-fraudulent profitable trade makes prices more efficient (in either time or value), and in this case it's time. Their competitors aren't you and me, and someone is going to make money from the arbitrage, so what's the big deal? The capital they spent to set up their edge just comes out of the profits that their competitors would be enjoying without them.
But HFT made the trades take longer, making them less efficient. By buying up the stocks while the trade was still en route, it caused the trade to fail and made the brokers try again at a higher price, wasting their time and money.
Trades get canceled/rejected because the price moves away from their limit/immediate-or-cancel order all the time, even without any HFT involvement. And the reason those orders don't get filled is because they don't reflect a competitive bid or offer for the security. I don't think you're suggesting that we should accept less-competitive orders just because the market participant took the time to submit it.
The only difference with HFT's, and market makers, and all other high-speed/high-frequency participants in the mix is that these changes in price happen more often, which indicates that price discovery is more efficient (has better granularity, recency and accuracy). (Unless the market activity doesn't have 'economic merit', which the SEC devotes a significant amount of time to investigating).
One might argue: "do we need microsecond-level granularity on the price of Amazon?"
I'll take the Matt Levine route and ask: "Do you think quotes should update once per minute? (I suspect most people will say yes). How about once a second? (Yes?) Ten times per second? (?) Every millisecond? Microsecond?"
Now ask the flip side: "Should it be illegal to perform market activity every minute? Every second? Every...?"
It's hard to draw a line with any kind of solid reasoning. As an economy, we certainly reward people who can make these sub-second adjustments with a lot of money, and in general with the stock market, where every trade is, by definition, two consenting parties agreeing on a price, usually making money means you're improving market efficiency.
Also, I want to point out that it's not like HFT's are invincible magic money-stealing boogiemen. HFT profits are declining year-over-year (look at Virtu's recent earnings numbers and their current corporate strategy/direction) specifically because other market participants are responding to their existence and getting smarter about their own order execution.
> Trades get canceled/rejected because the price moves away from their limit/immediate-or-cancel order all the time
When your trade is occasionally beaten by someone at a different brokerage (or the same brokerage) that's normal. When there is a bot on the wire doing it every time that's a problem.
The changes in the price happen more often because they are marking up the price while the trade is still in progress. This doesn't help anybody except the HFT firm. Discovery isn't more efficient because the discovery has already happened, they set the price based on what they had discovered.
HFT profits are falling because people got wise to their system and built countermeasures.
Front-running is illegal and no more integral to automated trading than fraud is to accounting.
HFT is just rapid trading. Think of trading as communication. Just as better communication allows a group of people to arrive at a more accurate consensus, faster trading allows markets to arrive at better prices. Sure, there are criminals that also profit from better communication until they are caught, but that doesn't mean we should all go back to using the pony express.
Flash Boys presented a highly-skewed view of automated trading. For a balanced perspective, I think readers of the book should also read reviews by those who are in the business of trading.
How are inputs entered into these systems? Consider, for example, Trump's decision to withdraw the US from Syria. I assume there is a human somewhere who needs to tweak a variable like "conflict risk in the Middle East" as soon as possible after the news breaks? That cannot be automated... or can it?