Going by the operationalist idea of the article, which I interpret as duck typing, I think the key operations that can be applied to programmes are building them, running them, and measuring the build and its results against a certain goal. This applies both to a traditional programme and to deep learning. Programmes are, after all, written to satisfy human needs, even if the need is an entry to an obfuscated C contest.
For instance, imagine you have a black box that observes the horse races, Twitbook, the betting market and so on, and based on those observations executes bets for you with a bookmaker. The execution of the orders has a measurable effect on your net worth.
You might write a traditional programme which takes all of this data and, based on some ETL, statistical models and probability calculations, executes orders.
You might do some ETL, plug it all into a neural network, tune it and execute orders based on the results.
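To make the duck-typing point concrete, here is a minimal sketch of the two implementations under some invented assumptions: the data shapes, the toy probability rule, the `StatsBettor` / `NetBettor` names and the `model.predict` hook are all hypothetical, chosen only to show that both boxes expose the same `decide(observations) -> orders` operation.

```python
# Minimal sketch: two implementations of the same betting black box.
# All names and data shapes are hypothetical, invented to illustrate the
# shared "duck-typed" interface: each box exposes decide(observations).

class StatsBettor:
    """Traditional programme: ETL plus an explicit statistical rule."""

    def decide(self, observations):
        # ETL: keep only runners we have odds and a form score for.
        runners = [r for r in observations if "odds" in r and "form" in r]
        orders = []
        for r in runners:
            implied_p = 1.0 / r["odds"]
            our_p = min(1.0, r["form"] / 100.0)   # toy probability model
            if our_p > implied_p:                 # bet only with positive edge
                stake = round(10 * (our_p - implied_p), 2)
                orders.append(("back", r["horse"], stake))
        return orders


class NetBettor:
    """Deep-learning version: same ETL, decisions delegated to a model."""

    def __init__(self, model):
        self.model = model                        # any object with .predict(features)

    def decide(self, observations):
        runners = [r for r in observations if "odds" in r and "form" in r]
        features = [[r["odds"], r["form"]] for r in runners]
        scores = self.model.predict(features)     # opaque learned mapping
        return [("back", r["horse"], round(s, 2))
                for r, s in zip(runners, scores) if s > 0]
```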
Your traditional programme is very complex, and combinations of small bugs may have large effects on the results. Your unit and integration tests may themselves be wrong. Formal testing may itself reduce the system's expected value, and it is an arse to carry out for any large system. As the system grows, its expected value becomes harder to reason about through the operation of reading and understanding the code.
The internals of your neural network are also difficult to reason about, in different ways. It is hard to trace how specific parts of the network contribute to the measured effects of its output, and it will take time to tune it and build the most profitable model.
Both implementations of the black box may be backtested, and some sort of trust can be established in the expected value of each. Both implementations allow the operations of running and of measuring the results of that run. Both are difficult to reason about, each in its own way.
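And because both boxes quack alike, one backtest harness can establish that sort of trust for either of them without looking inside. Again, this is only a sketch: it assumes historical data arrives as (observations, payoff) pairs, and the names are invented for illustration.

```python
def backtest(black_box, historical_days):
    """Run either implementation over historical data and measure the result.

    `historical_days` is assumed to be a list of (observations, payoff_fn)
    pairs, where payoff_fn maps a list of orders to a profit/loss figure.
    """
    bankroll = 0.0
    for observations, payoff_fn in historical_days:
        orders = black_box.decide(observations)   # the only operation we rely on
        bankroll += payoff_fn(orders)             # measure the effect on net worth
    return bankroll

# The harness doesn't care which implementation it is handed:
#   backtest(StatsBettor(), history)
#   backtest(NetBettor(trained_model), history)
```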
We are perfectly happy to give money to people for them to do things without fully understanding their inner thoughts and the processes behind those thoughts.
>We are perfectly happy to give money to people for them to do things without fully understanding their inner thoughts and the processes behind those thoughts.
Yes, but we wouldn't say that we're programming them, which is the problem with the operational definition: it applies to everything. If my drunk uncle is great at horse betting, and all I need to do is give him a nice six-pack of microbrew to get a measurable net-worth increase out of him, I've not turned into a computer scientist.
Hence my argument that legibility is what matters. Programmers must be able to reason about a program, rearrange it, and understand the relationship between its syntax and semantics.
I think it's more accurate to compare deep learning to running a sort of physical experiment than to programming.
I agree that deep learning is not programming, and that the title of the post is perhaps wrong. However, I agree with the sentiment of the article. The contexts in which both operations are carried out are closer to each other than the context of employing your uncle is to either. I was trying to highlight that it is more useful to focus on the similarity of those contexts and on the net benefit of either method.
Also, if your uncle is better than your computer, then stop programming it altogether. However, if he were actually any good, then he shouldn't be talking to you, and you shouldn't be giving him any beer. Unless he was banned by the bookmaker.
Which is the golden duck?