“Computer science is not about computers” (quoteinvestigator.com)
288 points by akakievich on May 30, 2021 | 277 comments



My view is quite unconventional, but I believe computer science is a branch of mathematics that deals with large but finite structures (so they need an algorithmic description).

Compare with most of "legacy" mathematics, which studies countable structures (so the description can use arbitrary series).

Of course, there are larger sets, but they mostly serve as a theater (just like countable infinity is just a theater of computer science).

Finite sets in the rest of mathematics are sometimes considered uninteresting, but not so in computer science, where only very small sets (roughly up to 32 elements, small enough that all subsets can be easily checked on a typical computer) are considered uninteresting.

A good example is number theory, which philosophically considers all natural numbers, regardless of size, to be the same type of object. In computer science, the (admittedly fuzzy) magnitude of numbers plays a much bigger role.


> I believe, computer science is a branch of mathematics that deals with large but finite structures (so they need an algorithmic description).

This is a strange claim since the entire field was founded upon the investigation of potentially (and often actually) infinite computations.

> Compare with most of "legacy" mathematics, which studies countable structures (so the description can use arbitrary series).

Define "most". Do it in a way that makes real and complex analysis and topology (and probably many other branches) the smaller part of mathematics.

Most importantly though, my problem with this kind of discussion is that the question itself is meaningless. Not everything can be classified into neat "X is a Y" relationships. Not everything needs to be classified into such relationships. Even if the discussion reached a consensus, that consensus would be meaningless. Computer science is a part of math? OK, but so what? Computer science is not a part of math? OK, but so what? Neither conclusion would tell us anything useful.


> Computer science is a part of math? OK, but so what? Computer science is not a part of math? OK, but so what? Neither conclusion would tell us anything useful.

I assumed the implication here is that CS, like math, is considered by many not to be a science, but rather a field of construction based on logic. The obvious problem with calling computer science a science is that it isn't fundamentally based on measuring empirical evidence of a natural process. Maybe that still lands in the 'OK, but so what?' category; on the other hand, this has been much discussed re: math, and it may be useful to clarify in what ways CS is not employing the scientific method.


> it isn’t fundamentally based on measuring empirical evidence of a natural process.

What leads you to say this? If computation is in some sense the construction of certain forms of mathematics, is computer science not then the empirical study of computers (the objects which instantiate the math) and computation (the process of instantiation)? Of course there is abstract theory as well, but that's just as true in physics

Newell and Simon had some thoughts: "We build computers and programs for many reasons. We build them to serve society and as tools for carrying out the economic tasks of society. But as basic scientists we build machines and programs as a way of discovering new phenomena and analyzing phenomena we already know about... the phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature."[0]

The fact that digital computation is a new process doesn't make it "unnatural", it might be argued; some also contend computation takes place not merely in digital computers but much more generally, in which case distinctions between computer science/cognitive science/physics blur

Agree with your broader point, though. I'm not aware of any consensus on the epistemological or ontological status of computer science, or on its relation to the other sciences. It seems (to me) subject to many of the same philosophical questions that dog mathematicians, re: discovery vs. invention, the uncertain reality of various abstractions, generalizability, etc

Likewise agree that consideration of the methods employed in computer science can be fruitful, in particular if the goal is not so much to establish once and for all which category CS falls most naturally into, but simply to stimulate critical thought about the fundamental questions

[0]: https://dl.acm.org/doi/10.1145/360018.360022


I meant natural in the sense of originating from nature, specifically as opposed to something built by people. The fact that digital computation is a synthetic construction of humans is what makes it "unnatural"; that it's new is just a byproduct of humans having invented it recently.

I'd agree there are ways that we can observe computation as a scientist and form hypotheses and perform experiments, especially if, for example, I write a program I don't fully understand and don't know how to predict the behavior of, or, maybe much more commonly, when I observe software written by other people.

Thinking about the analogy to telescopes, the implication is that computers are an instrument for measuring something. Telescopes measure things about planets and stars, physical things that occur in nature. But what exactly do computers measure if they’re to be considered a measuring device? It’s fun to think of a computer being a physical device that measures pure logic; we can physically observe something that doesn’t occur in nature.

On the other hand, I’m hesitant to not draw some kind of line between CS and the hard sciences like physics, chemistry, biology, because there seem to be real differences between them. (I was going to point out examples, but realized it’s fundamentally tricky to nail down and I’d be setting a trap for myself. ;)) Yes I agree the philosophy of where CS lands, and what CS really is, does land in the same ambiguous camp as mathematics (probably because CS and math both truly are in the same category of abstract logic, not directly tied to physical observations.) Maybe more useful and abstract tools are more difficult to categorize precisely because they are used as part of all the sciences and arts...


> This is a strange claim since the entire field was founded upon the investigation of potentially (and often actually) infinite computations.

To me, that is not that surprising, although it's a good point.

My view has to do with the history of mathematics. People were fascinated with infinities a long time before they considered that large but finite systems can also be interesting. I think the applications of mathematics, mainly geometry and physics, are responsible too.

The development of a more finitist taste in problems (somebody else mentioned constructivism, which I think is fitting) came with the practical need to do computations and develop algorithms.

So I am not that surprised that the early forays into the theory of computation were through the lens of the infinite, rather than the finite.

> Most importantly though, my problem with this kind of discussion is that the question itself is meaningless.

Of course, I forewarned that it's just my view, and you're free to ignore it.

Look, part of why I mention it is that it seems rather surprising to me; I would consider infinite structures to be more complicated, in some sense; yet, in the history of mathematics (which lately includes CS, as a study of the large but finite), these were studied first. There was nothing to prevent the Ancient Greeks (or Euler) from discovering, say, lambda calculus, or how to sort efficiently. Although, it seems in many fields we progress from more complicated to simpler methods, in some way. But I think it's partly precisely because the finite is often considered uninteresting by mathematicians that it was overlooked. And that's the philosophical point I am trying to emphasize: different fields of math perceive (in the way they treat them) the same structures differently, and I gave the example of natural numbers. Another example is the notion of set cardinality; in most areas of mathematics people only care about the countable/uncountable distinction.


> “Not everything can be classified into neat "X is a Y" relationships.”

I’m with you on skepticism of the x-is-y relationship; however, I read the comment as comparing math versus computing academics.

Then, the so-what answer would be informative for the neophyte or youth who is interested in computing but struggles with mathematics instruction. Right? That's a real thing in education.

In fact, I find this to be the principal benefit of online MOOC courses. You can compare styles of instruction and pedagogy from major universities across the US (and internationally).


> computer science is a branch of mathematics that deals with large but finite structures (so they need an algorithmic description)

Another way to put this is that computer science deals with mostly constructive mathematics (more precisely, mathematics that uses intuitionistic logic, the kind that is natural to most programmers and computer scientists anyway). For instance, when you prove the fundamental theorem of arithmetic, you actually can translate that into an algorithm for factorizing numbers into products of powers of primes. And the converse holds too, an algorithm is a proof! If you can give me an algorithm that, given a number n, always produces a prime bigger than n, then that actually witnesses the infinitude of primes.
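
To make the algorithm-as-proof point concrete, here is a minimal sketch in Haskell (my own illustration; primeAbove and the helper names are made up). It reads Euclid's classic argument as a program: no d with 2 <= d <= n divides n! + 1, so the least prime factor of n! + 1 must exceed n.

  -- Constructive witness for the infinitude of primes:
  -- given n, produce a prime strictly greater than n.
  factorial :: Integer -> Integer
  factorial n = product [1 .. n]

  -- Smallest divisor >= 2 of m; it is necessarily prime.
  leastPrimeFactor :: Integer -> Integer
  leastPrimeFactor m = head [d | d <- [2 ..], m `mod` d == 0]

  -- No 2 <= d <= n divides n! + 1 (each leaves remainder 1),
  -- so its least prime factor is greater than n.
  primeAbove :: Integer -> Integer
  primeAbove n = leastPrimeFactor (factorial n + 1)

For example, primeAbove 10 evaluates to 11, since 10! + 1 = 3628801 = 11 × 329891.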

Constructive methods are everywhere in CS. For instance, to prove a proposition P, it's possible in classical math to say "assume (not P) is true, then derive a contradiction, hence P", however that would be really unnatural in CS! You never hear "I want to show an algorithm to solve P exists, let's suppose it's not computable, ... contradiction!", because you don't end up with an algorithm at all (what you proved instead was that it's impossible for an algorithm to not exist, without saying what it is). Likewise, if you want to show some number/program/data structure has the properties you care about, you almost always give the description explicitly.

For more information, I'd say that type theory is a great intersection of math and computer science in a way that's quite accessible to programmers, since we're already used to this kind of thinking, even if we weren't explicitly taught it.


Although the standard proof of the incomputability of the halting problem assumes it is computable and then reaches a contradiction. And that's more or less the central proof of everything to do with computability.


That's still constructive. This can be confusing because there are actually two kinds of contradiction proofs:

  to show ¬ P, assume P then derive a contradiction (false, ⊥)
i.e., ¬ P := P -> ⊥. This is actually just fine and constructive, it's just how you prove a negation.

OTOH, there's another kind of contradiction proof:

  to show P, assume ¬ P, then derive a contradiction (false, ⊥)
Written with function notation, this becomes P <-> (¬ P -> ⊥)

But if we unfold the definition of negation, we have that

  P <-> ((P -> ⊥) -> ⊥)
The left-to-right direction holds both classically and constructively, but the right-to-left direction uses double negation elimination, which is equivalent to the law of the excluded middle, AKA classical reasoning.
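
Under Curry-Howard the difference is visible directly in code. A minimal Haskell sketch (my illustration; Data.Void's empty type plays the role of ⊥):

  import Data.Void (Void)

  -- Negation exactly as above: ¬P := P -> ⊥.
  type Not p = p -> Void

  -- Proof by negation is constructive: to prove ¬P, assume P and
  -- derive absurdity. For instance, P -> ¬¬P is a plain total function:
  doubleNegIntro :: p -> Not (Not p)
  doubleNegIntro x notX = notX x

  -- The converse, ¬¬P -> P (double-negation elimination), has no total
  -- definition in this fragment; it is precisely the classical step:
  -- doubleNegElim :: Not (Not p) -> p   -- not definable constructively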


Sometimes, the strategy of proving ~P by showing how P entails absurdity is called "proof by negation". The phrase "proof by contradiction" is then reserved for the strategy of proving P by showing how ~P entails absurdity.

On that usage, constructivists are happy with all proofs by negation, but not generally happy with proofs by contradiction.


I find relevance logic much more intuitive than intuitionistic logic. Intuitionistic logic still has some weird assumptions that make it fit a lookup table, like classical logic.


This is an interesting view that captures a substantial portion of computer science including also programs and proofs with a short and clear criterion.

However, a key drawback of this view is that it undersells the linguistic aspect of computer science which is manifested in the search for suitable programming languages.

I think it is justified to regard the design of programming languages as a core area of computer science, and that design space is not finite.


Programming languages are not full linguistics, at least not yet. We focus primarily on syntax, semantics, and pragmatics. All of this, though, is firmly rooted in mathematics, defining grammars as expressions and mathematical relationships. This then enables formal mathematical proofs where we can reason about outcomes. I don't know if the search you mention really exists, or whether it is more of an optimization of language features to the problem space. Perhaps in a future where we're able to program our machines by having a conversation with them...
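
To illustrate that mathematical footing with a toy example (mine, not the parent's): a grammar can be written down as an algebraic data type and its semantics as a total, structurally recursive function, which is exactly what makes proofs about outcomes possible:

  -- Abstract syntax of a tiny expression language.
  data Expr = Lit Integer
            | Add Expr Expr
            | Mul Expr Expr

  -- Denotational semantics: map each expression to an Integer.
  -- Claims like "eval (Add a b) == eval a + eval b" can then be
  -- proved by structural induction over Expr.
  eval :: Expr -> Integer
  eval (Lit n)   = n
  eval (Add a b) = eval a + eval b
  eval (Mul a b) = eval a * eval b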


I once read an article suggesting that reading computer code does not activate the regions of the brain that are involved in language processing:

https://news.mit.edu/2020/brain-reading-computer-code-1215

This may indicate that the sampled programmers did not program in the way it can be done for example with declarative languages, i.e., primarily as a linguistic activity where we describe what we know about the task.

I also like the description in Programming as Theory Building by Peter Naur:

https://pages.cs.wisc.edu/~remzi/Naur.pdf

The feasibility of this approach may depend on the programming languages one uses.


Thanks for the articles. I definitely don't do it the same way I process language. To use the same example: I describe a task unambiguously, which makes it a translation from written text to memory regions or execution paths. It gets more concrete than just what I know about the task. A better parallel would be to writing mathematical formulas. When I process language, I believe, more of my brain is engaged in empathy, in trying to match context, in allowing ambiguity, in storing ideas and concepts to be disambiguated at a later point.


Yeah, I remember that I started reaching my first CS successes once I knew enough of the mathematics and properties of the hardware to "think like a machine", and was trying to explain to some of my peers who were still struggling how to do a more step-by-step execution in their minds.

In hindsight it's clear that you don't tell a story when you program a computer, or not the way I'd describe my day or teach my kids about a phenomenon. It feels more like unrolling a tape and executing a recipe in the simplest way you can, after you've built a mental model of the machine. You mimic an actor, maybe, rather than imagine an event?


I have a very different experience; when I code, it feels similar to mapping out a conversation or constructing a narrative, and this is how I teach it to my students.

I've met people who view it as mathematics or philosophy or a bunch of other things; in the end, I think there is a wide range of valid mental models for how computers function and therefore how to communicate with them.


Did they compare identical information expressed as written language versus programming language, or did they compare messages that contain logic (expressed via programming languages) with messages that don't contain logic (expressed via written languages)?


A substantial portion of theoretical computer science.

There are huge swaths of the field that deal with things like machine learning or distributed systems or computer architecture that don't really fall under that categorization.


Please explain how Dijkstra's _Go To Statement Considered Harmful_ (see https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.p... for the text) is part of mathematics. Then Knuth's famous reply, _Structured Programming with go to Statements_ (available at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.103...), and then the much later follow-up at https://cis.temple.edu/~ingargio/cis71/software/roberts/docu..., _Loop Exits and Structured Programming: Reopening the Debate_.

When you've disposed of those, read the less well-known paper _Programming as Theory Building_ by Peter Naur (see https://pages.cs.wisc.edu/~remzi/Naur.pdf for a link).

Are there computer science papers that could fit within mathematics? Yes. Are there important computer science papers that clearly don't? Also yes.


I agree (as somebody else pointed out) that these are more about software engineering (and I don't see a contradiction in being both; many people we consider physicists today were also engineers in their time).

Although I will point out, there is https://en.wikipedia.org/wiki/Structured_program_theorem which is a mathematical statement. Whether to actually follow that result when building software is a matter of engineering taste, but the theorem tells you, at the very least, you can do so.


Let's see. Dijkstra introduced the classic shortest-path graph algorithm that bears his name. Knuth wrote THE classic algorithms book. Peter Naur is best known for his work on language definition and parsing (he is the N in BNF notation).

Even if you try to draw a distinction between computer science and software engineering, most of the people writing those papers were pretty squarely on the computer science side.

But I don't draw that distinction. ALL of them were tenured professors of computer science. ALL of them were published in journals of computer science. There is no valid basis on which you can draw an artificial line between computer science and non-computer science, and put those outside of computer science. Doubly not if your goal is to say that computer science is a subset of mathematics rather than its own thing.


Your claim (as you explain elsewhere) seems to be that because the discipline of software engineering exists and is related to computer science (and there is a fuzzy boundary between the two), we cannot think of computer science as a subfield of mathematics.

I honestly don't see how that follows. Does the fact that writing LAPACK required software engineering mean that numerical linear algebra is not a part of mathematics?

My basis for grouping them together is because of the subjects under study and research methods used. You can always claim that two different fields are different, but I think making arguments for some relationship is more useful.


My claim is not that software engineering is related to computer science, it is that software engineering IS PART of computer science. And being part of it means that computer science is not merely an applied subfield of mathematics, a truth that mathematicians and computer scientists accepted decades ago when computer science split off from mathematics into its own department.

As for your example of LAPACK, "The existence of evening, my dear Boswell, does not mean that day is the same as night." Numerical linear algebra is on a boundary between mathematics and computer science. There is a large boundary between math and computer science. But then again the same can be said of, say, math and physics. But by the time you're worried about how to use caching effectively, and make that work in a distributed computation, you're pretty far on the computer science side.

Also your idea of grouping them because of research methods is problematic at best. For example machine learning research has a highly experimental flavor that does not fit in mathematics at all. Quantum computing contains a lot that is closer to a field of applied physics than of mathematics. And so on.


Those papers you mentioned are closer to the field of Software Engineering.


This is basically the No True Scotsman argument.


On what basis do you claim that software engineering is not part of computer science? Those papers were all written by professors of computer science and published in journals of computer science. All were cited many times by other papers written by other professors of computer science in other journals of computer science.


Software engineering is taught as a part of the field that's called computer science, but the original question was about whether the field called computer science is actually a science.

I guess the difference in that regard would be that (software) engineering doesn't necessarily follow the formal scientific method of formulating a hypothesis and then testing it experimentally. That would, in some sense, make it distinct from science. The same would practically apply to many areas in computer science that are studied experimentally, not just software engineering.

To elaborate on that a little, (software) engineering does build on experience and empiricism, as well as analytical thinking, but in practice it may be more in the form of "lessons learned" than in the form of hypothesis testing.

That doesn't make it any less valuable, or even any less valid as an academic area of research. It just, in some sense, makes it possibly distinct from the sciences.

Of course there are also problems in CS that can actually be studied with the scientific method, but I think amelius might have meant that publications such as Dijkstra's and Knuth's papers on goto would be more in the "lessons learned" category than in the "results from the scientific method" category. They would thus not really make computer science a science even though they're not really in the "CS as a part of math" category either.


The original question wasn't whether computer science is a science, it was whether computer science is a branch of mathematics.

My position is that computer science is neither a science nor is it a branch of mathematics.


I don't think there's necessarily a disagreement here. You're talking about "how" to approach problems and he's talking about "what" the problem space is.


You imply that mathematics isn't concerned with notation?


At Dartmouth College, computer science started as a branch of the Mathematics Dept. It was eventually made its own department (located in a former clinic for the mentally ill, Sudikoff Laboratory, which some see as ironic). Now Dartmouth is building a new addition to the engineering building (Thayer School of Engineering), and the Math and Computer Science Depts. will move there. I believe the thinking is that those departments collaborate so much that having them located in the same complex will be helpful.

Much of computer science research is more about applications of computers than research into computing itself, though some pure computing theory is still being worked on, and everything is becoming cross-disciplinary across many fields. One CS project I worked on was designing a smartwatch for health apps. It involved electronics engineering, operating system development, health applications development, sensor data processing, medical research on senior frailty, gesture UI design, UI usability research, secure wireless data transport and collection, and more. It was run by a CS prof, though.

If you go into CS you can end up working on anything and everything. Another CS robotics project involved electronically herding cattle. Where do cattle fit into CS theory? As end users of computers? :-)


At Harvey Mudd College, when I was there in the 80s, the Computer Science department was part of the Biology department: they had one prof who did CS exclusively, one who did Biology exclusively, and one who did both. The "real" departments (Mathematics, Physics, Chemistry, and Engineering) all offered a CS option within them, but one of the cool things about Mudd of that era (I'm not sure how much of this has survived since then) was that you didn't really do a specialized degree in Math, Physics, etc. but got a good broad education across the disciplines.


This is good thinking and I am sad to hear that it is unconventional.

And now I'll contradict the part of what you said about finite structures: At a theoretical level, the field deals with infinite structures, namely the Turing Machine tape and infinite time. We use finite Computing Machines to simulate a finite section of that tape in a finite time.

Maybe we should call it Turing Machine Science, because we use computers to study the behavior of programs in Turing Machines, just as astronomers use telescopes to study the behavior of atoms in stars and particle physicists use particle accelerators to study the behavior of elementary particles. We will never touch those particles, stars, or Turing Machines, but we can know them, hence the science.

Software engineering is like using one's understanding of the emissions of the sun to design better solar panels. Very practical, but you don't use your understanding of gravity-driven fusion reactors every day.


When you mentioned infinite structures, I thought you'd bring up the idea that our goal to automate often pits us against problems defined as collections of infinitary instances. I don't think the potential infinity of tapes and running times poses as much of a concern by comparison.

Maybe computer science is about giving, to borrow a little bit from Hilbert, finitary representation to infinitary structures. (Finitary representation with other properties of interest, such as tractability and whatnot, of course.)


> Maybe we should call it Turing Machine Science

Nope. Turing machines are arbitrary and rather unmathematical. The lambda calculus is a much better computational formalism, more mathematically grounded and oriented, with far more direct practical applications.


This reads like total nonsense to me. Why would Turing machines be 'arbitrary and unmathematical'? What's 'unmathematical' about them? They can be formally and precisely described and I don't know of any mathematician who wouldn't accept TMs as a sound definition.


Turing machines are arbitrary in that they don't derive from some mathematical basis. You could easily reformulate the abstraction in many different ways, replacing the "tape" or "read/write head" with some other quasi-physical concept that doesn't normally appear in mathematics. Those concepts are spurious, provided only for their analogy to real-world machines.

You can provide formal descriptions for many things - the Perl programming language, for example. I would also call that language unmathematical. You can study such objects mathematically, but they are essentially external objects of study which one is using mathematics to make more tractable.

In contrast, Curry and Howard discovered direct correspondences between logic and lambda calculus. For example, intuitionistic natural deduction is isomorphic to typed lambda calculus. The internal languages of Cartesian closed categories are lambda calculi.

If Church hadn't discovered lambda calculi, they would have eventually been discovered via one of these correspondences.

Wikipedia provides a partial list of these correspondences at https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon... :

* Girard-Reynolds System F as a common language for both second-order propositional logic and polymorphic lambda calculus

* higher-order logic and Girard's System Fω

* inductive types as algebraic data type

* necessity in modal logic and staged computation

* possibility in modal logic and monadic types for effects

* The λI calculus corresponds to relevant logic.

* The local truth (∇) modality in Grothendieck topology or the equivalent "lax" modality (◯) of Benton, Bierman, and de Paiva (1998) correspond to CL-logic describing "computation types".

As such, lambda calculi are inextricably embedded in mathematics and logic, and their core features (ignoring superficial choices such as syntax) are a discovery, rather than an invention. We can't change the equivalences described above, they are facts that arise from the formalisms developed to address subjects like logic and categorical analysis. The same is not true of Turing machines.


> They can be formally and precisely described and I don't know of any mathematician who wouldn't accept TMs as a sound definition.

Just because something can be formally and precisely described doesn't necessarily mean it's (aesthetically) "mathematical". Another example that comes to mind now is the notion of an applicative functor (Applicative in Haskell): sure, you can give the categorical definition for it, but it's quite an oddly specific structure (if anyone reading this knows where they arise in non-FP contexts, I'd really like to know!).
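
For what it's worth, the categorical definition can be sketched in Haskell itself: Applicative is equivalent to a lax monoidal functor (with strength). A minimal sketch; the class name Monoidal here is hypothetical, not a standard library class:

  -- Applicative presented as a lax monoidal functor.
  class Functor f => Monoidal f where
    unit :: f ()
    pair :: f a -> f b -> f (a, b)

  -- Recovering the usual interface:
  --   pure x    = fmap (const x) unit
  --   ff <*> fx = fmap (\(g, x) -> g x) (pair ff fx)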

So for Turing machines, maybe a question we might ask is, "how can we ensure Turing machines always terminate?" This is a much harder question than, say, imposing a type system on untyped lambda calculus which unconditionally makes all terms terminate, while still retaining the ability to do recursion and useful work.

A good yardstick to judge the "mathematicality" of a construction often amounts to looking at its compositional properties. That is, could you re-use the concepts over and over again, and could variants of the structure be uniformly described? While Turing machines and lambda calculus are both equivalent and can be both formally described, one does win over the other in terms of having a simpler, more compositional structure.

This is quite subjective, and in any case it feels remarkable that both Turing and Church came up with their constructions to describe computation formally!


> Just because something can be formally and precisely described doesn't necessarily mean it's

> ... (aesthetically)

The 'aesthetics' part is an interesting argument I didn't see, and yes, I agree that a particular concept arising in different areas of math is 'aesthetically pleasing'.

> ... "mathematical"

That's the part I have a problem with, mathematical objects are (more or less) exactly those that can be precisely and formally described. "Lambda calculus is a more elegant mathematical object" is not a statement I have a problem with, "Turing machines have a lesser status as mathematical entities", on the other hand, is sort of weird.

> So for Turing machines, maybe a question we might ask is, "how can we ensure Turing machines always terminate?" This is a much harder question than say, imposing a type system on untyped lambda calculus which unconditionally makes all terms terminate, while still retaining the ability to do recursion and useful work.

You can bound the number of steps or the size of the tape :) That's not a "well, actually"; the point I'm making is that other computational models (TMs, pointer machines, RAMs, counter machines) are more natural formalisms for thinking about some problems. Sure, logic/proof theory is intimately related to lambda calculi and it would be extremely unnatural to formulate the same ideas using Turing machines, but for things like analysis of algorithms, complexity theory or numerical analysis it would be similarly unnatural to use lambda calculus as the computational model instead.
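
That "bound the number of steps" idea is itself easy to state as a program. A minimal sketch in Haskell (my illustration; runFor and its parameter names are made up): a fuel-limited runner terminates unconditionally, whatever machine the step function encodes:

  -- step maps a state to either the next state (Left) or a result (Right).
  runFor :: Int -> (s -> Either s r) -> s -> Maybe r
  runFor 0 _    _ = Nothing                  -- fuel exhausted: give up
  runFor n step s = case step s of
    Right r -> Just r                        -- the machine halted
    Left s' -> runFor (n - 1) step s'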

They certainly aren't lesser forms of math, even though one might find them less aesthetically pleasing.

> ... it feels remarkable that both Turing and Church came up with their constructions to describe computation formally!

It is. It completely blew my mind when I first learned about that fact. If computation can be exactly described by different formalisms resulting in the exact same set of computable functions, then it must be a very fundamental feature of how "things" work!


> That's the part I have a problem with, mathematical objects are (more or less) exactly those that can be precisely and formally described. "Lambda calculus is a more elegant mathematical object" is not a statement I have a problem with, "Turing machines have a lesser status as mathematical entities", on the other hand, is sort of weird.

I agree as well. I don't think Turing machines have any lesser status (it's literally equivalent to lambda calculus, after all!)

> but for things like analysis of algorithms, complexity theory or numerical analysis it would be similarly unnatural to use lambda calculus as the computational model instead.

That's a good point as well. Because of the nature of hardware, neither lambda calculus nor Turing machines are very good fits, so RAMs and pointer machines essentially take over.


> I don't think Turing machines have any lesser status (it's literally equivalent to lambda calculus, after all!)

"Equivalence" has a specific meaning here, though. The term "Turing tarpit" was invented to point out the limits of that equivalence, and any attempts to define computations for Turing machines quickly run into that.

The sense in which I consider Turing machines to be "rather unmathematical" is closely related to this. You can write useful mathematical proofs in lambda calculus - as I've pointed out elsewhere, there are automated proof assistants based on this fact. There's no such equivalent for Turing machines. Theoretically, there could be, due to Turing equivalence, but in practice, no-one wants to exploit that.

Given one formalism that has actively been used for such mathematical purposes, and another that has been actively avoided, I call the latter rather unmathematical by comparison to the former.

People can quibble with my word choice, but it's describing a real, measurable phenomenon, the effects of which have played out predictably over the last 70 years.


> Just because something can be formally and precisely described doesn't necessarily mean it's (aesthetically) "mathematical".

Thank you, this is exactly what I was getting at.


Why would you think that? I believe I've read that even Church himself said that Turing machines are a more elegant basis for computations, since they are much easier to mathematically reason about. I'm sure one can prove everything proved for Turing machines for lambda calculus, but I disagree with your statement that it is more mathematically grounded. It may be true in a syntactic sense, but definitely not in a mathematical-structure one.


> I believe I’ve read that even Church himself said that Turing machines are a more elegant basis for computations, since they are much easier to mathematically reason about.

I'd be interested in a source for this.

Lambda calculi are used as the basis for several functional programming languages, which seems to argue against the idea that they're less easy to reason about.

They're also used as the basis for proof assistants such as Coq. Coq is based on the calculus of constructions, which is a typed lambda calculus. Again, this would be a mystifying choice if lambda calculi are difficult to reason about.

> I disagree with your statement that it is more mathematically grounded.

I've provided more support for my position here: https://news.ycombinator.com/item?id=27334163


According to this:

https://plato.stanford.edu/entries/church-turing/

"In his review of Turing’s work, Church himself acknowledged the superiority of Turing’s analysis of effectiveness, saying:

computability by a Turing machine … has the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately. (Church 1937a: 43)"

Moreover, Gödel himself (as well as other mathematicians) found Turing machines and Turing's thesis to be more persuasive:

Gödel also found Turing’s analysis superior. Kleene related that Gödel was unpersuaded by Church’s thesis until he saw Turing’s formulation:

According to a November 29, 1935, letter from Church to me, Gödel “regarded as thoroughly unsatisfactory” Church’s proposal to use λ-definability as a definition of effective calculability. … It seems that only after Turing’s formulation appeared did Gödel accept Church’s thesis. (Kleene 1981: 59, 61)

Gödel described Turing's analysis of computability as "most satisfactory" and "correct … beyond any doubt" (Gödel 1951: 304 and *193?: 168).

For what it's worth, I do think you've started an interesting discussion!

You're allowed to have an opinion that you prefer lambda calculus over LCMs (logical computing machines -- a better term for Turing machines, one that Turing favoured) for conceptualizing and reasoning about computation mathematically.

Myself, I find Conway's game of life to be the superior framework over both lambda calculus and Turing's Logical Computing Machine :-P


Thanks for the reference.

However, those quotes are not saying "that Turing machines are a more elegant basis for computations, since they are much easier to mathematically reason about." I still consider that statement to be false, and I've substantiated that in my comments.

The quotes from Church and Gödel are saying that Turing's formalism was the more helpful in making the case that it had captured the notion of effective calculability. That's understandable - it's much like e.g. Cantor's diagonal, in that it makes its subject very concrete.

But that doesn't tell us anything about the usability of the formalism as an actual mechanism for computation, or analysis of computation. In that respect, lambda calculus has proved far more useful, as shown by the examples I mentioned, and many more. Another example is denotational semantics, which maps programming language semantics to lambda calculus.

In fact, one of the inventors of denotational semantics, Dana Scott, developed the first non-trivial model of the lambda calculus, in terms of complete lattices. That model addressed the concreteness issue, albeit decades later, in the early 1970s. It's possible Gödel, Church etc. might still have preferred the Turing tape as an intuition-friendly proof for effective calculability, but it would no longer be possible to reasonably make Gödel's "thoroughly unsatisfactory" claim about the lambda calculus.

Coming back to denotational semantics, I'm pretty sure no-one has ever provided a non-trivial language semantics in terms of a Turing machine - and if you wanted to do that, one of the easiest ways to do it would probably be to implement a compiler from lambda calculus to Turing tape, and generate the Turing representation automatically from that.

More generally, my position could be simply refuted with counterexamples of tractable Turing machine solutions to any of the various problems that lambda calculus has been used to address, like programming language implementations, programming language semantics, duals of logical systems, and proof assistants. To my knowledge, no such examples exist, and the reason for that is because what I'm saying is a fact, not an opinion.

> Myself, I find Conway's game of life to be the superior framework over both lambda calculus and Turing's Logical Computing Machine :-P

I'll agree that the game of life is only slightly less useful than the Turing machine as a model of computation!


> The quotes from Church and Gödel are saying that Turing's formalism was the more helpful in making the case that it had captured the notion of effective calculability. That's understandable - it's much like e.g. Cantor's diagonal, in that it makes its subject very concrete.

This is what I primarily meant with my controversial statement, sorry I didn’t articulate it well enough. Of course lambda calculus is very useful and has plenty of applications, never meant to say otherwise!


I think you've demonstrated far more familiarity with the subject matter than me (and my own understanding is admittedly pretty basic), and just wanted to say I really appreciate your thoughtful response. You're right that those quotes don't really fit for what you were asking for evidence of.

I also agree that the absence of such a refutation is a pretty solid argument that your position is simply a fact and not an opinion as I argued.

Cheers friend!


Turing machines are terribly inefficient, though. They may be easier to reason about than lambda calculus, but not good for practical computing purposes. Their value was in proving that logical and mathematical reasoning could be mechanized with an automatic device, something that had not been clear until then.

Computing science cares a lot about building efficient processes. Thus to create real working programs, a better basis is a combination of lambda calculus for defining mathematical structures and (Von Neumann based) agent-based models for defining stateful processes.

Modern programming languages are evolving to be capable of representing either model, to adapt themselves to the style more suited to the problem at hand.


Efficiency is not important at all when you talk about a mathematical proof. Do we really care about a proof using induction taking n or n^2 operations?

For seeing whether a given statement holds true, minimizing complexity is the most important.


> Efficiency is not important at all when you talk about a mathematical proof.

Precisely, and that's the main difference between computer science and the rest of mathematics.

Typically CS cares about the process to find a result, and not just its value nor the possibility or impossibility to find it.


> Typically CS cares about the process to find a result, and not just its value nor the possibility or impossibility to find it.

Typically, programmers care about the process to find the result. Some programmers are computer scientists. You cannot conclude that all computer scientists are programmers, so your statement is based on a logical fallacy.


Computer science still has computational complexity theory and analysis of algorithms as sub-fields, with complexity classes and Big O notation. These very much care about the process of finding results, which was the basis for my statement :-) The fallacy is in your interpretation of what I said.

There are results in theoretical computer science that don't care about the efficiency in the process; the most essential are the computability theorems exploring what parts of mathematics can be computed, and what we mean by computation anyway.

But the most essential part (the halting problem) was resolved long ago, so the lion's share of applications of computing science is in finding practical ways to use computers, which again needs to take efficiency into account.


> Efficiency is not important at all when you talk about a mathematical proof. Do we really care about a proof using induction taking n or n^2 operations?

Efficiency and tractability of representation is also an issue, though, and mathematicians (and programmers) do care about that, a great deal. That's why the lambda calculus has been used as the basis for proof assistants such as Coq, whereas Turing machines have not.


This whole debate is always so bewildering. Programming paradigm fanboys get into heated arguments about which model is the "best" one, but actual computer science research uses myriad different models of computation, usually endeavoring to select the one that is most convenient for the given purpose.

Sometimes that could mean using the lambda calculus, particularly in study of language theory and type systems. Other times that could mean some sort of black box model, such as when proving lower bounds for solving problems using specific operations (see e.g. the sorting lower bound). Yet other times, like when establishing the ground-zero of some new variety of computational hardness, I can't think of many more suitable models to cut up into pieces and embed into the substrate of some other problem than those based upon Turing machines.


> Programming paradigm fanboys get into heated arguments about which model is the "best" one

My original comment certainly reads that way, but my intent was really to point out that it doesn't make sense to privilege the Turing machine model in the study of computation. I wrote more about why in this comment: https://news.ycombinator.com/item?id=27334163


Well, computing science studies programming paradigms; so defining them and analyzing what makes them suitable for what purposes is pretty much within its scope.

As I said above, it may very well be that the best usage for Turing machines is using them in mathematical proofs; where the efficiency of the computation is not a concern.


Really the best usage of all the computation models we're discussing here is using them in mathematical reasoning. If you're looking to "create real working programs," then a better basis is probably going to be some combination of actual industry-grade programming languages and actual CPU architectures.

This response might come off as a little facetious, but seriously, I think the idea of "founding" industrial computing languages/platforms upon theoretical research models of computation misunderstands the relationship between theory and practice. There is a relationship for sure, the research on these models usually does want to translate into real-world implications somehow, but your functional programming language is not the literal lambda calculus.


Certainly practical programming languages are not a one-to-one implementation of a theoretical model, but these models do create families of related languages that keep a close relationship and are separated from languages based on a different model.

Each time a new theoretical model is created to represent a particular programming problem, entirely new languages are created to ease the practical approaches of building systems for the underlying problem.

And it is worth keeping track of which models are good for which problems. So no, theoretical models are not good just for doing math with them, but also for guiding practical usage.


> They may be easier to reason about than lambda calculus

That was a claim someone made, but it turned out to be a misunderstanding, and is incorrect.

I've covered this here: https://news.ycombinator.com/item?id=27338055 (see the parent comment for quotes from Godel & Church that I'm referring to.)


If you like, we can generalize and call it "computation science" to avoid the focus on the tool/machine in favor of the process.


"Computing science" is more commonly used as an alternative, but yes, that approach makes much more sense than focusing on some particular abstraction.



This by the way is the most arrogant comment I've seen on this site.


What about my comment makes it so arrogant?

In case it helps, I've provided some substantiation for my position here: https://news.ycombinator.com/item?id=27334163

I should also clarify that I didn't intend to argue that lambda calculus should be the only way of understanding computation, but rather that it doesn't make sense to privilege the Turing machine model, as the comment I was replying to suggested.

The Turing machine model is arbitrary, and it is unmathematical in the sense I've described in my comment linked above. A different culture (or species!) would be likely to come up with a different computational machine-like model, but any culture that develops formal logics would be likely to discover the lambda calculus as a consequence of that.


> Turing machines are arbitrary and rather unmathematical.

Thus proving the point that a field that studies them cannot be considered a branch of mathematics.


That might be true if there were no more natural mathematical representations of computation. But as I've pointed out in another comment (https://news.ycombinator.com/item?id=27334163), there is such a representation - the lambda calculus.


There are large and finite structures in mathematics that would be very odd to call computer science. Take group theory and, say, the monster group [1].

[1]: https://en.wikipedia.org/wiki/Monster_group


I see nothing odd in calling that computer science. It seems more like hardcore computer science.


And what then couldn’t be called "hardcore computer science?"


I believe the core difference between comp-sci and other math is in taking into account the efficiency of the representation and/or transformation.

Thus defining the Monster group is discrete mathematics, but not computer science (or not directly). Finding parts of the Monster group that can be represented and manipulated by computers would be computer science.


> computer science is a branch of mathematics

Subject boundaries are arbitrary, but they have a practical implementation in terms of university CS departments.

Some CS departments came out of math departments, and some universities have a "Department of Mathematics and Computer Science." The theoretical side of CS does seem like a branch of mathematics.

Other CS departments came out of EE departments, and some universities (MIT, Berkeley) have a "Department of Electrical Engineering and Computer Science." The practical side of CS does seem like a branch of engineering.

A number of stand-alone CS departments (Stanford) still seem to be part of the school of engineering, so my vote is for CS as an engineering discipline with a theoretical basis in mathematics (perhaps like information systems or signal processing.) I also like the idea of Caltech's "Computing and Mathematical Sciences," as it seems to bring a lot of computing and applied math under one roof.


I always tell my students that computer science is the special subset of math that is coincidentally "easily" executable on a computer.

This subsumes finite structures, parseable problem descriptions, structured formulas, formalized algorithms, etc.


Before there was a CS major at many universities you would major in mathematics if you wanted to work with computers.

John Kemeny majored in mathematics and taught “Finite Mathematics” while a professor. He saw the application of BASIC not as a new field but as a way to simplify computers so that it wasn’t only mathematicians and scientists who could program them.


Dartmouth is an example of a CS department coming out of the Math department rather than the EE department.

I tend to think that basic (though probably something like Python/numpy rather than BASIC proper) programming fits well into many math courses. The fact that modern TI calculators can run Python seems to mesh nicely with this. (And classic calculators with BASIC also have a long history.)

Math courses often introduce algorithms for arithmetic and algebraic computation, so some coverage of algorithms as a concept seems to fit as well.

> BASIC not as a new field but as a way to simplify computers so that it wasn’t only mathematicians and scientists who could program them.

This is a fantastic vision. Of course mathematicians and scientists also benefit from user-friendly languages like BASIC or Python. But the idea that computer programming (and computing in general) could be helpful to undergraduates majoring in humanities and social sciences, and perhaps to the public at large, was probably still a fairly radical idea in the 1960s! Trying to make that happen by creating a programming language that first-year students could learn in an afternoon (or so) was/is a remarkable step toward making that vision real.


Computer science research is a mix of (at least) mathematical, engineering, and scientific traditions that varies by subfield. Among the subfields I'm most familiar with, theoretical algorithms are mostly mathematical, while algorithm engineering is closer to engineering. Data compression has both mathematical and engineering work, while bioinformatics is a mix of all three traditions.

It's not always clear what is computer science and what isn't. For example, you can find both mathematicians and computer scientists in theoretical computer science. In bioinformatics, you often see similar work from people with CS and life sciences backgrounds.


Computer science has two origins: it was born from electronic engineers who studied ways to build the physical devices required for computation, and from mathematicians who studied the best ways to represent the desired computations as configurations on such devices. This double soul still lives on as the distinction between hardware and software (though the line can be blurred, with programmable microchips like FPGAs or the direct implementation of algorithms in hardware in ASICs).

However, there's a general and unified way to define anything computing-related, unifying both traditions: seeing computer science as the study of the automatic processing of symbols, i.e., anything that the human mind treats as a symbol with a meaning, which can be represented in physical devices and transformed into a different set of symbols through mechanized processes.

This definition widens the scope of comp.sci beyond its origins in representing calculations of physics and maths, to include other fields typically taught in the degree but that people don't think of as computer science: natural language processing, design of viable user interfaces (human-computer interaction), design of adequate programming languages apt for different problems.

The most general view of this definition encompasses disciplines far removed from engineering, but which are nonetheless used to expand its frontiers: Aristotelian ontology was used to invent object-oriented programming and is still used to explore the semantic web; semiotics is used to explore which ideas can be represented and processed automatically, and the best way to build user interfaces tailored to the way we think.


Yes. Indeed in much of the world outside the USA there isn't even such a label as "computer science"; the stuff that's taught as "computer science" in the USA is taught under some sort of "math" label.


I studied it under the name of "Informatik" (Germany), which seems to be the most commonly used word in Europe.

The Wikipedia article for "Computer Science" has relevant information, quote (some line breaks added):

  In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested.

  In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek.

  Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."
https://en.wikipedia.org/wiki/Computer_science


In Portugal, there is some time to spend on such matters during the 5 years that an Informatics Engineering degree takes (certified by the Engineering Order).

Since Bologna, it got levelled to a Masters, but most still take the 3 + 2 years, so as not to be left behind the older generation that had 5 + 2 for a Masters.

So there is plenty of time to go down into the subjects that the USA considers computer science.

However, for those that want to really go down the rabbit hole of computer science as seen in the USA, the appropriate degree is Applied Maths into Computing.


As seen in the USA? European CS degrees are generally more comprehensive, because we do all the humanities already in high school, which leaves more time for math, and it is more common for people to continue with a masters degree.


Well, from what I know about them anyway.


In Spanish it's "Ingeniería informática" (Informatics Engineering); you would call it CS Engineering.


> Good example is number theory, which philosophically considers all natural numbers, regardless of size, to be equal type of objects. In computer science, the (although fuzzy) magnitude of numbers plays much bigger role.

Somewhat similarly, I have a degree in number theory / modal logic systems that was issued by the philosophy department at my university. The concepts are similar to a CS degree except you get really good at describing the problem because philosophy degrees usually involve a lot of writing.


Legitimate computer science should be about assembly and C for the most part: knowing mathematically how the 1s and 0s are output through the machine's circuits.

Nowadays it's just an antiquated term for programmer. The average C programmer, I'd wager, is above 40 years of age, whereas the average Python or web dev must be pushing their late 20s.


It's the other way around: infinity can only be defined by using algorithms. Think about Peano numbers, for example.
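
For instance, here is a minimal Haskell sketch (my illustration) of the Peano naturals: an infinite set generated by a finite, algorithmic description, with operations defined by structural recursion:

  -- Two constructors generate infinitely many values.
  data Nat = Zero | Succ Nat

  -- Addition as an algorithm (structural recursion on the first argument).
  add :: Nat -> Nat -> Nat
  add Zero     m = m
  add (Succ n) m = Succ (add n m)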


I think it really depends on what part of math and what part of computer science you're talking about. Certainly the foundations of computation use finite objects from countable sets... but there is more to CS than Turing machines and encodings of numbers.

For example: computer vision, machine learning, data science, cryptography, etc. are all rife with infinite objects! Proof assistant software and SMT solvers can also prove things about infinite mathematical structures from number theory, ZFC, topology, etc.

Pure mathematics also cares about finite algorithms. Every proof is a finite sequence of deductions on finite objects from a countable set... e.g. a computer program! Other examples include: computing bounds, integrals, roots of polynomials, divisors, bases, fundamental groups, etc. Pure math is full of computation!

tl;dr the line between math and CS is extremely fuzzy.


I work with coinduction, which is all about infinite structures. So that's not a good definition :-)
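
For readers who haven't met coinduction, a rough illustration of the kind of object it handles: an infinite stream, defined by how it unfolds rather than how it is built up (a hypothetical Haskell sketch, names my own):

  -- no "nil" constructor: every Stream is necessarily infinite
  data Stream a = Cons a (Stream a)

  nats :: Stream Integer
  nats = go 0 where go n = Cons n (go (n + 1))

  -- observe finitely many elements of an object that never "finishes"
  takeS :: Int -> Stream a -> [a]
  takeS n (Cons x xs)
    | n <= 0    = []
    | otherwise = x : takeS (n - 1) xs

  -- takeS 5 nats == [0,1,2,3,4]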


Multiplication is an algorithm.
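
Spelled out over the Peano naturals, it is visibly one (a self-contained toy definition, not any library's):

  data Nat = Zero | Succ Nat

  add :: Nat -> Nat -> Nat
  add Zero     m = m
  add (Succ n) m = Succ (add n m)

  -- multiplication as repeated addition: literally a recursive algorithm
  mul :: Nat -> Nat -> Nat
  mul Zero     _ = Zero
  mul (Succ n) m = add m (mul n m)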


I've come to believe that ideas/statements such as these are crucial, but neither true nor false. They're a perspective lens. Ideally, we should be able to slip in and out of such perspective lenses. A contradictory statement could be equally true/useful, even (especially?) if held simultaneously.

Avoiding computer science directly... Geophysics is a "telescope science." A typical geophysicist sees themselves as an expert in seismic interpretation, the tool they use. When the conversation turns to the actual subject (the earth), they call it geology, or rock physics. It's not an idealistic take, or a very scientific one, but it's apparently useful to their work.

To take the reductio ad absurdum head on, I don't think it's totally useless to think of astronomy or microbiology as telescope or microscope sciences. It certainly introduces biases, but it might also remove certain biases and lead to new ways of phrasing a question. It might lead to new lines of inquiry, and is somewhat descriptive of how these fields developed historically. Wasn't Astronomy Astrology, before it was telescope science?

You could go with an intentionally provocative "computer science is not about mathematics" or "not about science."

What if we were to phrase Dijkstra's statement as a question: "Is computer science about computers?" It's not a statement you can approach with empirical falsification. That doesn't mean it's false, or useless. It just means you can't treat it like you would F=ma.


The reverse statement would probably be something like "The core pursuit of computer science is how to best design and harness computers". Which is a reasonably accurate description of many areas within CS including machine learning, distributed systems, programming languages, and computer architecture.


> "The core pursuit of computer science is how to best design and harness computers"

This is why I've been quite happy with the Information Science major. I joke that it's "watered down compsci", as it avoids higher-level topics like OS design and anything beyond the introductory Data Structures. Instead, the major uses that time to introduce psychology, sociology, and user experience/interface design. As a professional, I've found that focus on "how we interact with and best use technology" to be useful.


Well stated.

That would be leaning in to the "telescope science" analogy.


I love this take. I guess metaphors serve a similar purpose: they help draw analogies and expand an idea.


For my entire time as an undergraduate, the large top-ten research university I attended offered no courses in programming language theory, nor were these topics woven into the computing curriculum at large. (Actually, there may have been a handful of graduate courses, but undergrads were strictly prohibited from joining -- there was a mandatory theory of computation class required for all CS majors, but the topics varied wildly between semesters and the primary lecturer was notoriously apathetic about teaching.)

In graduate school, I attended another large top-ten research university and again, no courses on programming language theory. The reason being all the PL faculty had either recently been poached by industry or accepted positions at more prestigious universities.

The result was that my entire computing education felt like watching shadows on the wall of Plato's cave; I was never exposed to fundamental concepts like lambda calculus or to non-standard languages like Haskell or Lisp that might have given a different perspective on computing.

Only recently in the past 1-2 years have I started to fill in the gaps myself, but I can't help but feel cheated. It's pretty crazy that the research-level faculty turnover can prevent thousands of students from being exposed to such an important aspect of computing.

(imagine, for instance, if an entire class of engineers had no option to take a course on heat transfer simply because there was no research faculty available who specialized in heat transfer research)


This also applies to interviewing. Google, for instance, generally doesn't include anything PL-theory related in their interviews, even though it would often be more relevant to the work than random dynamic programming problems. As a result they produced languages and frameworks like Go, Dart, Angular and TensorFlow, which display ignorance if not outright contempt for modern programming language theory. This led to the latter two being mostly replaced by React and PyTorch, which are more influenced by language-design best practices, to Dart being mostly ignored, and to Go being violently rejected by a significant subset of the programming community.


> As a result of this they produced languages and frameworks like Go, Dart, Angular and Tensorflow, which display ignorance if not outright contempt for modern programming language theory.

This is a non sequitur. Google hires a lot of PL PhDs (I'm one of them). And for relevant teams there is a "Domain Expertise" portion of the interview. And many of the people working on the languages and frameworks you mention have such a background.

You don't like these systems. That's fine. But "Google would do it all differently if they just hired some PL PhDs" is just false.


I really dislike Go, but its creator(s) is more than familiar with language theory, and it is absurd to say otherwise.


I didn't read the GP as making a statement about the _creators_ of Go, but rather about its target users. And Go is explicitly targeted at "Programmers working at Google [who usually] are early in their careers and are most familiar with procedural languages, particularly from the C family."[0] If Google's interviews gated on the sort of programming language questions the GP mentioned, then Go's target users would have been very different, and Go would likely be a different (better? worse?) language.

[0]: https://talks.golang.org/2012/splash.article


I know someone that was on the Dart team. He's a PLT expert. In fact that's rather understating it. Don't mistake some pragmatic choices in language designs targeted for business use with ignorance.


What are the best ways to fill in those gaps? I bet many of us never enjoyed those classes in school, or even had a traditional computing education.


For me learning Haskell has been a great way to expose myself to things that were previously "unknown unknowns". I do a mix of coding, for intuition, and reading papers, for theoretical context. The papers written by SPJ et al in the 1990-2000s give a remarkably clear history of how certain functional programming concepts jumped the gap from CS theory to practical implementation in Haskell.

Just as assembly gives as close to a "bare metal" view of hardware as many of us will ever get, languages like Haskell and Lisp give a "bare metal" view of CS theory.
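
As one hedged illustration of what "bare metal CS theory" feels like: Church numerals transcribe almost verbatim from the lambda calculus into Haskell (a sketch of mine, assuming the RankNTypes extension):

  {-# LANGUAGE RankNTypes #-}

  -- a Church numeral n is "apply s to z exactly n times"
  type Church = forall a. (a -> a) -> a -> a

  zero, one :: Church
  zero = \_ z -> z
  one  = \s z -> s z

  suc :: Church -> Church
  suc n = \s z -> s (n s z)

  plus :: Church -> Church -> Church
  plus m n = \s z -> m s (n s z)

  -- back to ordinary numbers: toInt (plus one (suc one)) == 3
  toInt :: Church -> Int
  toInt n = n (+ 1) 0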


Seeing SPJ refreshed very fond memories for me :-).

I accidentally stumbled onto Haskell in my masters course. On the first day at university I attended a trial class of the functional programming course[1] and instantly liked the way the professor taught. However, I didn't take the course then and forgot all about it. The next semester I took the compilers course and approached that professor about my masters thesis. During a meeting he gave me a couple of options, one of which was in the functional programming domain. This time I took the plunge and went all in. I took the FP course, and the professor who taught FP became my thesis advisor. The next two years were the most intense and intellectually satisfying years of my life. Not only did I learn Haskell, but I also built a compiler for it. I still get goosebumps remembering the rollercoaster ride I had in those two years.

During all that, the book by SPJ [2] became my constant companion. The book was actually out of print, but my advisor had a copy signed by SPJ himself :-). I promptly photocopied it, so I have it with me even to this day.

Suffice to say, Haskell and lambda calculus, along with SPJ and my advisor, have had (and continue to have) a lasting influence on my life.

[1] https://www.cse.iitb.ac.in/~as/fpcourse/fpcourse.html

[2] https://www.microsoft.com/en-us/research/wp-content/uploads/...


Thanks! This looks really good. The fact that you did your programming course at IIT Bombay further motivates me to work hard for my "dreams".


Reading the textbooks that people use in those classes is pretty helpful.

Dan Grossman has a great set of courses on Coursera, and a good number of resources are in those discussion boards.

https://www.coursera.org/learn/programming-languages


I suggest a combination of theory (videos/books/papers) and practice.

For practice, I highly recommend learning Haskell. If you are new to functional programming then learning Haskell is challenging and frustrating, but as with any new topic the key is to not give up but keep probing. A good recent development is that a lot of mainstream languages are beginning to include functional paradigms such as lambdas, closures, etc. For example, Java introduced lambda expressions, and Javascript has had them for a while now. But from a first-principles standpoint Haskell is as good as it gets, so please do learn and code in Haskell, if not at work then at least in side projects.

For theory I've found following material super useful.

0. This[0] is an incredibly awesome lecture where Phil Wadler takes us on a whirlwind tour of computer science. He talks us through different foundational structures on which almost everything (hardware and software) about computer science is built. I watch this lecture once every few months :-). It helps you build up context and ground various topics.

1. The one and only SICP. Book[1] and lectures[2].

2. Automata theory[3]. This isn't an easy course, but it gets to the heart of the matter, i.e., the meaning of "function" and how one can be mechanically "computed".

3. Category Theory is where a lot of active research is happening in CS theory. This is a very good lecture series[4]. The pace may seem a bit meandering, but don't be put off. Bartosz is a gifted teacher and works incredibly hard to disseminate knowledge. For evidence just look at a recent post on HN[5] about an article he published.

[0] https://www.youtube.com/watch?v=aeRVdYN6fE8

[1] https://mitpress.mit.edu/sites/default/files/sicp/full-text/...

[2] https://www.youtube.com/watch?v=-J_xL4IGhJA&list=PLE18841CAB...

[3] http://ce.sharif.edu/courses/94-95/1/ce414-2/resources/root/...

[4] https://www.youtube.com/watch?v=I8LbkfSSR58&list=PLbgaMIhjbm...

[5] https://news.ycombinator.com/item?id=26991300


> For evidence just look at a recent post on HN[5] about an article he published.

> [5] https://news.ycombinator.com/item?id=26991300

Ignore this part because the one I referred to is a different person! Though they are both terrific :-)


That post is by a completely different Bartosz.


Darn! All these days I thought they were the same! Thanks for correcting me!


You could find a MOOC (Massive open online course) for pretty much any programming language you want. Here's a pretty good one for Haskell.

https://haskell.mooc.fi


I personally feel the pain of this when trying to hire someone who understands core concepts in PL theory and can help maintain a framework. It's also not the sort of thing I can teach my senior devs on the job in under a year when they've never even touched anything like Forth or Lisp, or even tried to write a parser by hand.


I feel an analogous pain on the flip side of interviewing! I would kill for a job that lets me exercise some of these skills, rather than the brain rot of a typical software job, which doesn't demand that one think too deeply about things.


I think the only way to do this might be founding your own company and building your own product in Forth and Lisp.


React re-rendering logic requires one to think plenty deeply.


Of course, there's interesting work to be done at all levels of the stack. However, not every company will allocate time for solving frequently-encountered problems with creative solutions. Often, it's "ok, what's the minimum amount of time we can spend to fix this problem *right now*?", rather than "how can we make sure we don't have to re-solve this problem again ten more times in the future?"

Not me, but a friend described a whole team of engineers at his company hired to put out the fires caused by using database software that is a poor fit for the application, rather than biting the bullet and either 1) performing a migration or 2) rolling their own solution.


I wish my colleagues would appreciate the beauty of abstraction and PL theory as much as you do. It seems they actively resist any effort in that direction.



In Europe, "Computer Science" is translated with a word that is a mix of "information" and "mathematics". University course names are:

> [...] informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh). In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain. [1]

[1] https://en.wikipedia.org/wiki/Computer_science#Etymology


That is true, but in French at least "informatique" means both "computer science" (for example you can study "informatique" at university) and "anything related to computers". "What's your line of work?": a DBA, sysadmin, software developer, computer scientist, etc. may all answer simply "informatique". A series of books for beginners about Excel, Word, the Internet, etc. may be called "Collection informatique".

Or if, say, you have issues delivering something on time to a client (no matter the domain), you can always invoke a "bug informatique".

So "informatique" means and is used, at least in french, much, much, much more than just "computer science".

In a way it's even worse than in English: at least "science" is added to "computer" in English, and it's kinda self-explanatory. In French everything is in the same basket: from someone doing their Ph.D. to someone having a lesson to learn how to use the mouse... it's all "informatique".


In Germany, anything computer-related is subsumed under "Informatik".

- Students learning to use MS Office in school? Informatik.

- People fixing printers and replacing your harddrive? Informatik.

- System administrators managing a datacenter? Informatik.

- Data scientist applying deep learning techniques? Informatik.

- University professor trying to prove P==NP? Informatik.

Honestly, I envy the Americans for their distinction between "computer science" (CS) and "information technology" (IT). Even if computer science is not really about computers.


It depends on the context. A university degree in Informatik will obviously not be about using MS Office. But 7th-grade Informatik is. 10th-grade Informatik introduced programming at my school. A vocational qualification like "Fachinformatiker für Systemadministration" (certified IT specialist in system administration) will be about sysadmin work.


To 90% of the population it's not "obvious", or we would not need T-shirts labeled "I will not fix your computer for you".

Even a common programmer does not use any actual "computer science" 99% of the time, and your typical sysadmin type probably never knew any. So it's simply wrong and confusing to use the same word for both.


The same applies in Spanish (at least where I live).


In Finnish it's "tietojenkäsittelytiede" which consists of:

* "tieto": knowledge but also sometimes information or even data. Computer is "tietokone", knowledge machine (IMO "tieto" one of the worst words in Finnish due to the too broad scope which is why we also say "informaatio" and "data" these days)

* "käsittely": processing or handling

* "tiede": science


> In Europe, "Computer Science" is translated with a word that is a mix of "information" and "mathematics".

I don't think the "ics" in Informatics comes from "mathematics". It is more general: Aesthetics, Economics, Genetics, Linguistics, Physics, Statistics. It just means "the study of".


Wait a moment, the casual etymology of that word suggests (at least in Spanish) a profession or a science, not "the sum of math and information" per se.

That same suffix, -ática, is also applied to "the mathematician" as the person and "Mathematics" as the science (el matemático, las matemáticas).

So let's say that you are a guy from two centuries ago. Someone tells you "this guy has studied informatics, he is the informatic of the town". That would sound as if he "is versed in the study of information" rather than computing.

Also, in Spain, instead of "the computer" (the thing that computes, calculates), they call it "the order-ator" (ordenador, the thing that brings order).


Hmm, where are you from? I'm from northern Spain and those nouns don't suggest anything related to science to me. "El matemático", maybe, but only because we associate it with a theoretical field; "informático" is a practitioner of "informática", as "químico" is of "química".


The computer term in Spain came from the French "ordinateur".

Ordenar has two meanings in Spanish:

- To command someone

- To sort

Both are related. In order to sort some set, you need order. And rules. Thus, "ordenador" makes a lot of sense.

But if I were some guy from the 50s, I'd translate computer science as "informática electrónica" (electronic informatics).


This wikipedia article is not very accurate. In Hungary "informatika" is usually used only for primary/secondary school subjects covering every-day IT tasks like document editing, typing, or sending email. The most commonly used names for CS courses are "számításelmélet" or "számítástudomány" which translate as "theory of computing" and "science of computing".


I would add that what is basically Computer Science curriculum is called “Programtervező Informatika” (roughly software engineering/modeling informatics~=compsci) while what is generally the same as Computer Science Engineering is called “Mérnökinformatika” (roughly engineering informatics).

But you are right that as a job description, “informatikus” is a more basic position than “programozó”=developer/software engineer, etc.


The original name of the university course in Italy was Scienze dell'Informazione, Information Sciences. I remember that my relatives were surprised and kept asking me if I was really about to study journalism (news "informano" / inform people, in Italian). I had to explain that it was about computers. Informatics, not journalism.


It used to be called Data Processing. I guess that confused people? Information Services was used for people who didn't know what data was. Before the GUI it was a lot harder.


In Japanese, "computer science" is directly translated as 計算機科学, which basically means "calculator science" - but the phrase is unusual, very rarely used.

Instead, more commonly seen are 情報工学 ("information engineering") or 情報科学 ("information science") - which is equivalent in meaning to "informatics".


There are at least four more-or-less equivalent Korean terms in common use. "컴퓨터 과학" (lit. computer science) and "컴퓨터 공학" (lit. computer engineering) are the most popular and are frequently seen in universities. "전산학" (lit. [electronic] computing science) is less common but preferred by several prominent universities [1]. "정보과학" (lit. information science) is substantially less popular than both, but can be seen, for example, in academic institutes like 한국정보과학회 (the Korean Institute of Information Scientists and Engineers).

[1] There was even a significant attempt in 2000 to change the name of KAIST [2] CS department from "전산학" to "컴퓨터 과학" or similar. The attempt was unsuccessful and to this day its name remains "전산학(부)". Prof. Kwanggeun Yi has written a public letter [3] against the change.

[2] https://www.kaist.ac.kr/en/

[3] http://ropas.snu.ac.kr/~kwang/memo/name.html


In Swedish it is called "datavetenskap" i.e. "data science".


... and nevertheless it takes a lot of time to realize that it is not about computers.

But I agree, the European phrases are more honest to the content.


These words overlap with IT too much. I'm suspicious of whether they refer to Computer Science specifically, but please prove me wrong.

I always find it slightly irritating that my learned peers from the Information Technology team — they who rigorously study the practice of managing Jira installations, Windows 10 upgrades, finite Active Directory domains, and the long term effects of CISCO certifications — have land grabbed the English word Information.


In Romania the top higher education institution in CS is called "Automatics" emphasizing the applications.


In my country that would be related to "automatismos". Thus, industrial automatization, control engineering, automatic theorem demonstrations and so on. Self regulated machines.


"Computer science was a fraud. It always had been. It was the only branch of science ever named after a gadget. He and his colleagues were basically no better than gizmo freaks. Now physics, that was true science. Nobody ever called physics “lever science” or “billiard ball science.“

The fatal error in computer science was that it modeled complex systems without truly understanding them. Computers simulated complexity. You might know more or less what was likely to happen. But the causes remained unclear."

- Bruce Sterling, The Zenith Angle


> The fatal error in computer science was that it modeled complex systems without truly understanding them. Computers simulated complexity

I think this part is backwards, and I would even say that CS is about complexity itself. In some ways it is even meta-mathematical even though it is a subset of mathematics.

There is an interesting paper measuring the complexity of different things in terms of a minimal Turing machine (sorry, I'm not sure about the details but will try to find it), and it gave a relatively small number for the complexity of the base axiom set of modern maths (a few KB, or maybe MB?). It really put into perspective for me what is mathematically provable, versus all the rest of the things that we don't even have the tools to reason about; at most we can compute them.



Naming it after a profession doesn't mitigate the critique from Sterling's quote. Mathematics isn't named calculator science.

https://www.youtube.com/watch?v=7EA-eVWEIKY


https://en.wikipedia.org/wiki/Actuarial_science

Also, people arguably use the scientific method when programming. I think that Sedgewick illustrates this point well in Algorithms: https://algs4.cs.princeton.edu/14analysis/


Interestingly enough, being a math teacher doesn't imply being a good calculator.

I saw this happen with my math-teacher hero and with ordinary teachers alike.


Computer science is the study of computing things. In fact, virtually all of math used to be about the study of computing things, until about 200-300 years ago, when the study of mathematical structure branched off from the science of computing things.

Computer science has existed for thousands of years, the naming has just been a bit off.


Dijkstra hand-wrote a paper titled "On the cruelty of really teaching computer science" where he argues that we should think about CS more as pure reasoning like mathematics than something tied to a machine.

https://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1036.PDF


Whenever this comes up I feel compelled to say that Dijkstra has a huge blind spot here, one that is perhaps more obvious with decades of hindsight. He advocates that formal proof of a program against its functional specification be part and parcel of writing it; note that what he refers to by "formal proof" is ambiguous.

If he means the informal proofs mathematicians write and publish all day long, these actually include a lot of handwaving, metaphors, generalizations, and leaps of logic that the reader is presumed to be expert enough to fill-in-the-blanks. So for example, Wiles had a non-trivial bug in his proof of Fermat's last theorem that was thankfully non-fatal and fixable.

On the other hand, if Dijkstra was referring to formal proof, then without a proof assistant one still easily makes mistakes in it, and even with a proof assistant the task is so immensely tedious even now that even mathematicians don't do it, so why would programmers of non-safety-critical apps?[1] And another blind spot is that no formal proof will help you if your specification is wrong, and only give you false confidence: you can't navigate the world without errors if the only geometry you know is Euclidean.

[1]: of course, note that many people are trying to develop good enough proof assistants that mathematicians would feel add more value than remove via tedium. And also note that formal methods are being employed when the stakes are high enough; e.g. formal verification is performed on processor circuits.
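
For a feel of the gap between the two senses of "proof", here is a fully formal, machine-checked fragment in Lean 4 (a tiny illustration of my own; real verification efforts are orders of magnitude more tedious than this):

  -- trivial concrete facts are one-liners...
  example : 2 + 2 = 4 := rfl

  -- ...but even "obvious" general statements already demand explicit induction
  theorem zero_add' (n : Nat) : 0 + n = n := by
    induction n with
    | zero => rfl
    | succ n ih => rw [Nat.add_succ, ih]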


One of my uni professors was working on proof-carrying code (https://en.wikipedia.org/wiki/Proof-carrying_code), and I got cursorily involved. Although I agree with you, the fact that there is a way forward that is entirely based in mathematics (on the formal side) makes me also agree with OP. I don't see a contradiction. As for the more informal side: the way something is used does not necessarily define its nature.


Indeed, I don't disagree with OP. I just felt compelled to point major problems with Dijkstra's argument, because it's somewhat generally known yet its very real shortcomings aren't as widely known.

The general response seems to be that "yeah, ideally we should be more mathematically rigorous with programs if we have time and mathematical expertise", but few have sufficient exposure to formal methods to understand that it's far from the panacea the memo makes it out to be and there are more reasons not to do it than just time and expertise.


There is a way forward, but it is tedious as all hell. I love formal methods. I spent years in grad school working in it. But the truth is that formal reasoning struggles to scale to interesting programs, especially if you have to consider open programs, and is utterly incoherent for most software engineers. You can compare something like symbolic execution, which seems amazingly elegant and powerful, with coverage guided fuzzing, which is hacky and random. Coverage guided fuzzing eats symbolic execution for lunch.


There's a lot more in that paper than an argument for formal proofs. In fact, formal proofs are more of an incidental idea than the main point of Dijkstra's argument.

For example, he suggests that we need to understand computer science as a "radical novelty" and stop applying inapt analogies that come from thinking about this like a gradual evolution of mechanical things.

Inapt mechanical metaphors include software "tools" and "workbenches" that require "maintenance". Getting stuck with bad industrial analogies is "medieval thinking" that prevents real understanding.


Yes, he advocates that we need to view it as a "radical novelty" and critiques existing methods, but the only better method he puts forth is with formal proofs; it's not incidental, it's the reason for calling existing methods backward and insufficient for managing programs.

> it gives us a clear indication where to locate computing science on the world map of intellectual disciplines: in the direction of formal mathematics and applied logic, but ultimately far beyond where those are now, for computing science is interested in effective use of formal methods and on a much, much larger scale than we have witnessed so far.

I'm not too interested in arguing point-by-point the other arguments for viewing it as a "radical novelty", but even mathematics deals in methods that require maintenance (e.g. calculus -> analysis), and half of it is coming up with the correct definitions (metaphors). Is it all that medieval? (Ironically, the medievalist recognizes that logic made great progress in that era, then took a break in the Renaissance until Frege and friends.)


One of my favorite professors started the year by telling us that the only equipment we need to work on computer science problems is a pencil and some paper.


CS is a way of thinking about how to solve things, with regard to the efficiency of the solution.

Example: my wife likes to put gym shorts and shirts in different drawers. To my CS mind that doubles the seek time of a retrieval.

The little bowl by the door is a cache of my most recently used stuff.

People who neatly file their papers (e.g. bills) are optimizing for retrieval efficiency - of an operation that is actually very rare.

When my wife and I leave the apartment we often take the garbage out to the chute. My wife likes to drop off the garbage before pressing the button to call the elevator. To me that's weird, because calling the elevator is long-running IO in a separate thread - might as well start it asap.
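
In actual code the instinct looks something like this throwaway Haskell sketch (the delay and the chore are stand-ins, not measurements of my building's elevator):

  import Control.Concurrent (forkIO, threadDelay)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

  main :: IO ()
  main = do
    arrived <- newEmptyMVar
    _ <- forkIO $ do                    -- press the button first: start the IO
      threadDelay (30 * 1000 * 1000)    -- elevator takes ~30s to arrive
      putMVar arrived ()
    putStrLn "garbage down the chute"   -- do the other chore while it travels
    takeMVar arrived                    -- block only when nothing is left to do
    putStrLn "elevator is here"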


> To me that's weird because calling the elevator is long running IO in a separate thread - might as well start it asap.

Oh, man. I've felt exactly the same way wrt. a lot of IRL scenarios, implicitly optimizing the number of "threads" I can do tasks in, for instance:

- starting an automated but lengthy task (e.g. choosing Nixpkgs PRs to automatically review) before going out for a period of time

- starting the microwave heating food before going to the toilet

- pressing the elevator button before tying my shoes (in a private elevator scenario)

Distinct from multitasking, which splits your attention: here you can still dedicate attention to the task at hand while knowing in the background that a thread is running. These types of behaviors may not really add up to much long term, but it sure feels nice to optimize IRL scenarios.


> People who neatly file their papers (e.g. bills) are optimizing for retrieval efficiency - of an operation that is actually very rare.

It may be rare, but that misses the point that the importance of retrieval can be disproportionately high - when you really need that bill, you definitely want it and want it quick.


> when you really need that bill, you definitely want it and want it quick.

Seems very unlikely; it's a very improbable event, and if it does happen, say you have "all your bills for the last 3 years" jumbled up together, it'll take a minute or two to find the right one anyway. Versus filing each one carefully...


Learning programming almost feels like a curse. I guess I was naturally predisposed to something like programming as I always remember liking efficiency and making things more efficient.

However, after getting a CS degree and building things at work... I get upset a lot. I see so many things that are unoptimized, and I get angry when I have to wait because the process is bad.

I refuse to do things without proper tools, as I don't want to fiddle with something for hours while a proper tool can achieve a desired result in minutes.

Now that I think about it, this has been affecting me greatly.


My favourite take on "computer science" is from George Forsythe who founded Stanford's computer science department in the 1960s. He didn't seem to be embarrassed to put the computer into computer science.

Excerpt from http://i.stanford.edu/pub/cstr/reports/cs/tr/65/26/CS-TR-65-...:

> I consider computer science to be the art and science of exploiting automatic digital computers, and of creating the technology necessary to understand their use. It deals with such related problems as the design of better machines using known components, the design and implementation of adequate software systems for communication between man and machine, and the design and analysis of methods of representing information by abstract symbols and of processes for manipulating these symbols. Computer science must also concern itself with such theoretical subjects supporting this technology as information theory, the logic of the finitely constructable, numerical mathematical analysis, and the psychology of problem solving. Naturally, these theoretical subjects are shared by computer science with such disciplines as philosophy, mathematics, and psychology.


The very first thing that Hal Abelson says and puts up on the chalkboard in his very first SICP talk (Lecture 1A, 6.001 "Structure and Interpretation") in 1986 is that "computer science" is not about either science or computers. https://www.youtube.com/watch?v=2Op3QLzMgSY


> computer science is not about computers, any more than astronomy is about telescopes, or biology about microscopes

That is actually underselling telescopes and microscopes. It was telescopes that really gave us modern astronomy. Before we had the ability to really observe stars and planets, we were stuck with a very simplistic, geocentric view of the universe. The telescope was what really opened up avenues for us to understand astronomy.

Similarly, before the invention of the microscope, we had a very limited understanding of biology. There was no germ theory of disease, just theories about the 4 humors. It was the microscope that really opened up avenues for us to understand biology. In fact, we even have a branch of the science that is basically dedicated to the biology of stuff you see under a microscope - microbiology.

With astronomy and biology, the science, such as it was, preceded the invention of the tools that were really needed to study it. With computer science, people were not capable of doing calculations fast enough to really appreciate complexity theory and asymptotics. At low N, N^2 and 2^N can look similar (4^2 == 2^4). The computer both became the application for computer science and revealed the need for this area of study.
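
The divergence shows up as soon as N leaves the toy range; a quick illustrative snippet (trivially checkable, nothing rigorous):

  main :: IO ()
  main = mapM_ row [2, 4, 8, 16, 32 :: Integer]
    where row n = putStrLn (show n ++ "\t" ++ show (n ^ 2) ++ "\t" ++ show (2 ^ n))

At N = 4 the two columns tie at 16; by N = 32 one reads 1024 while the other has already passed four billion.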

One can almost imagine an analogy, where the stars are invisible to the naked eye. Someone invents a telescope, and all of a sudden discovers the full wonders of stars. There is a pretty good chance that astronomy in that world might be called something like "telescope science" since the telescope is so intrinsically linked both to the birth of the area of study as well as its application.


"the difference between mathematicians and computer scientists is that mathematicians love math and computer scientists hate computers"

https://mobile.twitter.com/sydgibs/status/138740846208320307...

(not true in general but still insightful)


pg has a good essay partially around this topic called "Hackers and Painters." That essay also lends its name to his book of essays. To quote

> I've never liked the term "computer science." The main reason I don't like it is that there's no such thing. Computer science is a grab bag of tenuously related areas thrown together by an accident of history, like Yugoslavia. At one end you have people who are really mathematicians, but call what they're doing computer science so they can get DARPA grants. In the middle you have people working on something like the natural history of computers-- studying the behavior of algorithms for routing data through networks, for example. And then at the other extreme you have the hackers, who are trying to write interesting software, and for whom computers are just a medium of expression, as concrete is for architects or paint for painters. It's as if mathematicians, physicists, and architects all had to be in the same department.

http://www.paulgraham.com/hp.html


The place I heard the quote is the opening statement by Hal Abelson in the course he co-taught with Gerald Sussman, now part of MIT OpenCourseWare. Not quite sure when it was taught, but the text, SICP, was written in 1985. He goes on to say that the word geometry derives from metron, to measure, and Gaia, the earth, and that geometry was developed in Egypt to restore boundaries to land after the annual flooding of the Nile. But it's really the beginning of our efforts to formulate our concepts of space and time.


I hate this quotation and find it not only false but sad. It's as if a physicist said "physics is not about mass, energy and matter, it is about ODEs and PDEs."

And yes, astronomy is pretty much about looking at the shadows of sticks under the sun; and this includes more complex "sticks" like telescopes.


I also disagree with the quote. I think it's fair to say that the goal of Computer Science is to generally improve computing, but the purpose of astronomy is not to generally improve telescopes, nor biology microscopes, etc.


I've always interpreted the statement to mean "theoretical CS is the only real CS: everything else is just glorified software engineering."


A lot of people think it is. Computers are a deep well and you could easily subdivide it quite a bit.

1. Math-heavy "CS" that studies algorithms.

2. The study of teams and best practices, maybe "Computer Sociology"

3. The study of tech team and company efficiency, maybe under psychology.

4. Computer Engineering, the study of how to engineer computers

5. The hypothetical science behind that engineering, the science not of algorithms, but of structure and design of computers themselves


This is what CS should be. But academic CS is full of people who really wanted to be in the math department but aren't quite bright enough to make the cut.

So they cargo cult CS into a sort of weird pure/applied-ish math hybrid full of contingent generalisations like Big O and debatable abstraction traditions - not least the idea of provability, which only applies to conceptually self-contained micro-problems and is a much harder sell for big complex systems.

The engineering track - including the theory of how to design systems so they actually work, are easy to use, and are maintainable - is underrepresented.


The beginning of this lecture (30 seconds in) by Harold Abelson (author of SICP - Structure and Interpretation of Computer Programs) explains why "computer science" is a terrible name.

https://www.youtube.com/watch?v=-J_xL4IGhJA&list=PLE18841CAB...

Not sure about the year.


That lecture (from 2005) is how I first came across this idea of computer science not being about computers. It was interesting (though not massively surprising) to discover that similar arguments have been made for several decades earlier.


That video lecture is from 1986.


Thanks, I stand corrected. Part of me did think 2005 would have been a bit too recent.


Seeing as the Danish "CS" degree deliberately[1] wasn't named "Computer Science", but "Datalogi"[2] (which translates rather directly to "Datalogy"), and since it happened in the sixties, I somehow assumed this article would end up mentioning Peter Naur in some way.

[1]: http://www.naur.com/comp/c4-3.html [2]: http://www.naur.com/comp/c4-4.html


We also now have a degree that is more equivalent to a US computer science degree: Software Engineering.

I think the US degrees are more focused on practicality than the Danish version. Basically, the Danish universities will teach basic programming in one or two languages and then expect you to be smart enough to figure out the rest.

It's not that one education is better than the other, but given that I didn't end up doing research or heavy computational work, I might have preferred a US education.


I'd expect "engineer" (ingeniør) to be a protected title in Denmark just like Sweden, which requires a math heavy university programme?

So the way to call yourself a software engineer without a math education is to move to a country like the US, making even the "Software Engineer" title problematic.


It is a protected title, and yes, it is math heavy, at least for the first couple of years.


And calculus has nothing to do with pebbles or counting. So what?

Also, isn't it easiest to think of a computer as an abstract concept that could represent both a physical device and an abstract machine? Computation needs a computer, whether real or abstract.

Lastly, I think science is the more “wrong” word in the name.


> And calculus has nothing to do with pebbles or counting. So what?

Coincidentally, in my native tongue, we regularly don't use "calculus" as a term for mathematical analysis any more than we use "computer science" for informatics.


As per Paul Halmos, one can't really write a good calculus book as taught in US schools, since there is no single subject corresponding to calculus. One needs to study series (and other subjects I can't really list now).

I’m not sure which of his books I read it in..


I was thoroughly confused when I found out how the US high school math curriculum is structured. To this day I can't remember what the hell is "precalculus". In my country I had separate textbooks on: functions; equations and inequalities; sequences and series; planimetry; stereometry; goniometry; analytical geometry; combinatorics, statistics and probability; complex numbers; mathematical analysis. Every time I refresh my knowledge on what "precalculus" is I promptly forget it again because it doesn't correspond to any of the fields that we were taught and that I'm used to think in terms of.


This quote, regardless of its origin, has been used many times in the past in similar forms, and in different contexts, and is quite useful. Another example: geometry is not about compasses and straight edges anymore than computer science is about computers.


This is straight from the school of Peter Naur.

He called it datalogy, the science of the nature and the use of data.

See Peter Naur: “The Science of Datalogy”, Communications of the ACM, July 1966.

https://dl.acm.org/doi/10.1145/365719.366510

Incidentally, he became the first professor of datalogy in Denmark at the University of Copenhagen, founding DIKU, the Institute of Datalogy.


Datalogy seems like a bad name especially now in the era of "Big Data" & "Data Scientists"..

Maybe Computology would be a better name..


Dijkstra, to whom that quote is often attributed, preferred to call it "Computing Science" as it is the science of computing.


Now that you've said it, I realize that's how we say it in Portuguese (computing science, not computer science).


Would you please provide the native Portuguese words? Like others, I'm interested to know what they are.


Ciência da computação = "science of+the computing" = computing science


We settled this debate at our department by calling ourselves the computing science department. This makes it clear that we sit at the interface: on one side, the mathematics that algorithmically computes various things; on the other, the fact that these computations need to eventually end up on a machine that can be engineered today or in the future - a computer.


I agree with the sentiment, but I've actually come to consider "computer science" to be a great name for our field.

Turing's universal machine is the original dependency inversion of our field: instead of specifically studying the programs that can be written for any particular hardware device, we largely study phenomena that are regarded as computation as defined by the Church-Turing thesis, and require that the hardware vendors supply suitable universal machines which can instantiate the phenomena of our study. Our field is the science of computers -- every program is a blueprint for a computational device -- but we choose to simulate most of our blueprints using universal machines, so that we don't have to send each one off to the silicon fab separately.
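
A toy version of that inversion, for concreteness (a hypothetical mini instruction set of my own devising): a single interpreter plays the universal machine, and every "device" is just data fed to it.

  data Instr = Push Int | Add | Mul

  -- a stack machine: the one piece of "hardware" every blueprint shares
  run :: [Instr] -> [Int] -> [Int]
  run []            stack           = stack
  run (Push n : is) stack           = run is (n : stack)
  run (Add    : is) (a : b : stack) = run is (a + b : stack)
  run (Mul    : is) (a : b : stack) = run is (a * b : stack)
  run (_      : is) stack           = run is stack   -- skip underflowing ops

  -- run [Push 2, Push 3, Add, Push 4, Mul] [] == [20]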


In Brazilian Portuguese, CompSci courses are named the equivalent of "Computation Science".

This is one of the few cases in which I think the Brazilian Portuguese translation is way better. It actually conveys a more precise idea of what the discipline is about.


Not about computers, but the science of what it is possible to do using a computer - and thus understandably referred to as "computer science". It rolls off the tongue more easily than "computing science".


I find this article to be inspiring.

I’m envious of those who have had the opportunity to study computer science and earn a recognised degree for that investment.

I'm self-educated and continuously study computer science. It is the one specific subject that I long to study full-time around like-minded individuals.

I'm fortunate to have had a successful career as a software engineer, but that's just not enough for me. I aspire to apply my mind for reasons other than a salary.


I don't think you need to be envious. The degree isn't what matters; learning and doing science is rewarding in and of itself, even on your own. You're already applying your mind.


I mostly agree with this. It's weird that you get an engineering degree called a "science" which is mostly set up to prepare you for entering the commercial software industry. Universities can't even figure out which college to put the CS department under. There needs to be some sort of split where one track focuses on academia/research and the other on practical, applied concepts.


Tangential, but QuoteInvestigator is a great website, and quite entertaining. Many famous quotes are not from the people they're usually attributed to.

Examples:

"I Have Never Killed Any One, But I Have Read Some Obituary Notices with Great Satisfaction" (Darrow, not Twain, https://quoteinvestigator.com/2011/05/05/darrow-obituary/)

"When the Facts Change, I Change My Mind. What Do You Do, Sir?" (maybe not Keynes, https://quoteinvestigator.com/2011/07/22/keynes-change-mind/)

"A Lie Can Travel Halfway Around the World While the Truth Is Putting On Its Shoes" (neither Twain nor Churchill, https://quoteinvestigator.com/2014/07/13/truth/)

"Everybody is a Genius. But If You Judge a Fish by Its Ability to Climb a Tree, It Will Live Its Whole Life Believing that It is Stupid" (not Einstein... https://quoteinvestigator.com/2013/04/06/fish-climb/)

"I Disapprove of What You Say, But I Will Defend to the Death Your Right to Say It" (not Voltaire, https://quoteinvestigator.com/2015/06/01/defend-say/)


I used to say Computer Science is not about computing, nor is it about science. But Nand2Tetris changed that slightly.

When you realise the fundamentals are loops, data, comparisons, addition, negative numbers, and instructions composed of those, then computer science becomes the building of logical structures on top of said computational primitives.
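
Nand2Tetris's central trick is easy to sketch (a toy Haskell model of mine, where (&&) merely stands in for the physical gate): everything above is wired out of one primitive.

  nand :: Bool -> Bool -> Bool
  nand a b = not (a && b)

  not' :: Bool -> Bool
  not' a = nand a a

  and', or', xor' :: Bool -> Bool -> Bool
  and' a b = not' (nand a b)
  or'  a b = nand (not' a) (not' b)
  xor' a b = and' (or' a b) (nand a b)

  -- even addition is "just" logical structure
  halfAdder :: Bool -> Bool -> (Bool, Bool)   -- (sum bit, carry bit)
  halfAdder a b = (xor' a b, and' a b)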


In Germany the equivalent field and degree is called “Informatics” (Informatik). I always felt weird about telling Americans I studied “computer science” because it sounds so dumbed down. It’s like how a 6 year old would describe it to someone else: “my older brother studies the science with the computers!”


Even though I appreciate the origins of "informatics" I prefer "computer science" nowadays because the word is less loaded in Germany. I feel like in the US there is more appreciation for CS. The word "informatics" has already been taken over and devalued by clueless business people like "IT" before it.


Maybe "computing science" would be more accurate


Wow, what a MASSIVE missed opportunity to quote the best variation of this idea by far, by Hal Abelson in the legendary "6.001 - Introduction to Computer Science" class at MIT in the 80s.

You can check it here: https://www.youtube.com/watch?v=2Op3QLzMgSY

Abelson in the first minute crosses out both computer AND science, and references the also-legendary SICP with "computer so-called science actually has a lot in common with magic".

Honestly, this omission alone makes the article feel empty.


You can flip this around, though. If the math and theory weren't relevant to real-world computers, we wouldn't consider it part of computer science. It'd just be some abstract mathematics.


Yes, but the definition of "computer" in this context is much more expansive than the hunks of silicon we're using to converse right now. The Turing machine was meant to capture the basic set of actions any human could in principle perform, made amenable for formal inquiry and algorithmic specification.

In particular, the computer programs we typically develop are designed to automate some process that a human would have otherwise done. We can study those processes -- and spaces of such processes -- independently of the executive agent that ultimately performs those processes.


Computer Science is to software development as theoretical mathematics is to economics.

Sure, there are some formulas in economics, but 99.99% of the time you are not going to prove anything mathematically and you are just looking up a formula written by somebody else. On the other hand economics has a huge amount of stuff that is not covered by mathematics at all.

It is disheartening to see so many people wasting so much of their potential by studying CS when they know they will get into software development anyway. Rather than racking up education costs they could be earning money and gaining experience.

I have been constantly hiring for the past 15 years and I have learned to stop caring about CS. Real world experience is worth more than comparable time spent studying CS.

The only reason I may prefer a candidate with a degree is because it takes a lot of work and perseverance to stick to the goal of earning the degree, but I almost don't care what kind of technical degree you have earned.

One more point: it is not like academia is the only place where you can learn CS! In some areas of knowledge, at the bleeding edge, you may need access to people, but the basic CS knowledge that can ever be even potentially useful in development work is all well documented in a huge selection of very good books.

You can pick them up and learn.

Good developers treat learning as part of their work, and it doesn't matter if something was not taught at school -- if they notice they are missing some knowledge they will just learn it.

If you need to study something I think it is better to study something orthogonal to what you are going to be doing. This can help create unique profile for you as a developer.

For example, I studied theoretical math, and over the years I came to the conclusion that it was a much better choice than going into CS. I learned logical thinking and how to deal with complex abstract problems. I learned most of the CS stuff anyway while doing my work, but I would probably never have learned most of what I learned in math.

Other good directions of study would be philosophy, visual arts, management, accounting -- you get the picture. Any of these could give you an edge as a developer for a particular set of problems.


I mean if you're only hiring people to do basic web dev (front or back end) then yeah sure you don't need a CS degree, but for anything complex having a real education in CS/Math/Engineering is definitely required. Someone building a database engine or compiler isn't doing the same thing as someone building a wordpress website or someone who integrates apps into salesforce. The latter don't really require a CS education but the former definitely do. Over time I imagine the split will be made more formal and we will see the "developer" job category split up more granularly.


If you look at projects, development overwhelmingly consists of websites, mobile applications and backend systems. And backend systems are overwhelmingly REST APIs, or message consumers that just translate the call into a couple of database calls and translate the response back to the client.

Even if you go somewhere like Google you will find most of their systems are just REST APIs as described above.

Now, there obviously is a lot of interesting projects for you compiler or OS lovers. But there is so much choice you don't have to be ready to work on them. I mean, I don't need to learn robotics just because 0.2% (entirely made up number) of projects on job market are about writing software to control robots.

So, to sum up:

-- if you are in it for money, don't waste time on studying CS, just learn basic programming and hop on any project.

-- learn on your employer's time. How fun is it to be paid while learning?

-- you don't need to get every job. You only need to get one (every three years...)

-- most projects are boring from the point of view of programming techniques you are going to be using. Learn to find fun somewhere else.

-- if you want fun projects you can always learn what you need on your own (on your or your employers time). You aren't going to be good developer if you don't spend considerable amount of time learning for the rest of your life, anyway. Just get used to spending time learning new stuff every day.


This also reflects how broken hiring is in tech. Leetcode and algorithm memorisation have very little to do with being a "good" developer. Can you communicate your ideas well? Can you understand and follow up when requirements are not clear? Are you able to work with others well? This last point in particular is where I find a lot of candidates fall down. They may be able to invert a binary tree, but if they have to have an actual conversation with, say, an end user to help resolve/understand a problem, many of them can't.


That is exactly what I am seeing. I think of things like ability to communicate, lead, etc. as force multipliers.

It does not matter much how good your technical skills are if other force multipliers are very low.

More than that, people who are bad at communication and relationships tend to stay away from these problems -- trying to bring every problem down to the technical level -- basically ensuring they remain at a disadvantage.


While I would agree that a Computer Science degree doesn't necessarily teach you a whole lot about Software Engineering (i.e. little-to-no time spent on version control, continuous integration, unit testing, microservice architecture, deployment, requirements gathering, data security, etc), it did give me a broad exposure to the different types of work that could be out there for programming, which would be a lot harder to do in the job market (i.e. no one is going to look at my web/game development history and go 'Yeah, this guy would be a good person to hire to work on our operating system!')

I learned about making Operating Systems, Compilers, 3D Graphics, Artificial Intelligence, Machine Learning, TCP/IP Networking, Databases, low-level Assembly programming, and how computers work at a hardware logic gate level, to name a few. And they offered other courses as well, these are just what I picked from the offerings.

Yeah, there was the standard Data Structures and Algorithms class (which in the business world is now not much more than a "you better know/refresh this if you want to get through our interviews" class) and one class about Automata theory, which my teacher would claim "is the only real computer science class at this school" and which was all mathematical proofs; but it still helps you understand regular expressions better, at least it did for me.

But most of the classes I took would have had practical applications in the workforce, for different types of jobs. And really, isn't that true of pretty much any undergraduate major? The classes you take aren't all going to be directly relevant to your particular path through the field; it's more about exposure to different possibilities and providing a broad base of knowledge to build from.


What should it be called then? I saw suggestions for computing science, but applications often don't 'compute' anything; rather, they make API calls. Data science doesn't seem to fit.

The best I can come up with is 'instruction science': the study of how to structure, execute and store sequences of instructions. The computer is a tool to do it faster, but you could also use pen and paper; it would just take longer.


I kind of like the Finnish term "tietojenkäsittelytiede" ("data processing science") and its older form "tietojenkäsittelyoppi" ("data processing theory").

It combines the focus on the process in computer science / computing with the focus on the data in datalogy / informatics. It says that the process is what we are really interested in. At the same time, it admits that the data and the results are what ultimately matter and that computation is just an irrelevant side effect we would like to avoid.


Here is Harold Abelson discussing the topic in ~1986. His introduction to “Computer Science” has stuck with me for years. Interesting to see the lineage of the statement; it must have stuck with him when he first heard it, as it has with me.

https://youtu.be/-J_xL4IGhJA?list=PLE18841CABEA24090


The quote might be accurate, but most of the people taking computer science courses care very much about computers, and particularly actual computers as they physically exist.

It'd be nice if there were a better division between the blackboard purists and the pointer-slingers, but in most cases they have been lumped together, and it produces a subpar education for both breeds.


As someone who hasn't taken CS classes, I was surprised to learn that what I'd assumed was CS is classified as Electrical Engineering (Designing processors, ICs etc), and Physics (concepts like reversible computing, quantum computing, the works of Turing, Feynman, and Shannon.) I still don't have a grasp on what characterizes CS.


Why would that be surprising? Processors and ICs contain a lot of analog electronics.


The works of Turing are absolutely not physics.


At the University of Edinburgh, there isn't a School of Computer Science, but a School of Informatics - emphasis on the information processing in human and machines.

In contrast, many schools nowadays are moving away from computing concepts towards teaching students how to use Microsoft Office, which troubles me greatly.


In my opinion, this thread illustrates insecurity over naming that people within this profession exhibit regularly.

The analogies that this argument hinges on are often brought up in a dogmatic manner with historic terms jeered in euphoric tone.

In threads like these, devil's advocates, normally copious, are scant. Why?


Because "pseudoscience" is a pejorative.

There is an uncanny valley between non-science and science. Realistically, everything we know about a lot of things exists in that uncanny valley: most or all of psychology and economics, a lot of zoology, ecology, the health sciences, etc. Yet it's an uncomfortable place.

Different professions have dealt with it differently over time, usually trying to reach one bank or the other.

The bad name pseudoscience has isn't unearned. Our modern conception of "scientific" (empirical falsification, etc.) was kinda invented to debunk turn-of-the-century psychology and economics... and those often did earn pseudoscience's reputation.

The philosophical implications of "hard" sciences really are different, and the disadvantages of semi-scientific pursuits do really manifest.

Say we study the mental health impacts of exercise on teenagers. You may find a result in one school. It might be replicable in a nearby school. It probably won't hold true in a different country, or a decade later. That's because we aren't really isolating fundamentals, as scientists should. OTOH, we can't isolate fundamentals in a lab and still have results that are relevant to actual mental health IRL. Does that mean the whole pursuit is pseudoscience?

Computer science is in the same position. You can strive for an F=ma, ideal understanding of fundamentals... but... there's not always a lot of gas in that car.


English is not my primary language, so correct me if I’m wrong. But didn’t the word "computer" have another meaning before the physical computer machine? Otherwise I would agree it should be renamed to information science.


It referred to a person who did computations by hand and/or with an abacus.


A “computer” is something that “computes”, morphologically.

But even so, it would better be called “computation science”.

It is called “informatica” in Dutch, in any case.


So what the hell is CS about?


Handling and processing of information. This includes computing and algorithms.


It's about computing, not about computers.


Algorithms and data structures.


Machine learning


Algorithms?


Is astronomy the science of telescopes?


Boolean algebra


Using computational tools to solve problems.


Richard Feynman's take on the term Computer Science: https://www.youtube.com/watch?v=lL4wg6ZAFIM


Computer science is to math as biology is to physics. Biology isn't technically about life, but the reason we have separated it and study it so thoroughly is that it is related to life.


“Computer Science” should be called “Computation”. I believe that covers everything we care about in this field. From most abstract math (Type Theory) to least abstract practical theory.


this is a conspiracy to move us away from hands-on interactive realtime seat of the pants programming to some respectable top-down, waterfall model or pencil and paper math shit.


As I see it, the term "Computer Science" includes all sciences that make use of 'computational power'. That includes most of the exact sciences nowadays.


The analogies given are kind of dumb, because astronomers don’t create their own telescopes. Computer scientists create computers and then everyone uses them to do stuff


I made an illustration from this quote: https://comic.browserling.com/93


If my memory from many years ago serves, I believe the introduction in SICP says “computer science is not a science and has little to do with computers.”


  Computer Science ~ physicists
  Computer Engineering ~ electrical engineers
  Computer Technician ~ electricians
But CS is used for all three.


Back before computer systems, computers were people who did calculations. It’s maths at its core.

Computers today are surely then just applied computer science.


Computer programming is to computer science like accounting is to mathematics.

Odd thing is I don't see a bunch of bookkeepers brandishing anything.


Typical US-centric problems.

Anywhere else, CS is called „Informatics“, meaning the science of information.

Broad enough to make articles like this unnecessary.


Eh, not really. At least in the Nordic countries I haven't really seen many translations other than "computer science" in e.g. uni degrees.


Computer didn’t always refer to a device.

Someone who did abstract computation was a "Computer." It was a profession.

No need to get confused about how we got here.


It's always a shock to a fresh undergrad who discovers that what they really meant to study was computer engineering.


My favorite description is: Creative problem solving with computers for almost any application area.


Note well: the article is about the provenance of the quote, not the veracity of the sentence.


Computing Science? Algorithmology?


Computer science is the mathematics behind counting.

You count the number of "swap" operations in insertion sort, quicksort, or merge sort. You count the number of "memory" operations. You count the number of bytes used.

When precise counts are difficult, you learn big-O notation to estimate how counts change as variables grow. Etc. etc. etc.
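
As a minimal sketch of that counting in practice (my own illustration in Python, not from the parent), you can instrument insertion sort to tally its swaps and comparisons and watch the counts grow with input size -- which is exactly what big-O summarizes:

  import random

  def insertion_sort(a):
      # Sort a in place, tallying the basic operations as we go.
      comparisons = swaps = 0
      for i in range(1, len(a)):
          j = i
          while j > 0:
              comparisons += 1
              if a[j - 1] <= a[j]:
                  break
              a[j - 1], a[j] = a[j], a[j - 1]   # one "swap" operation
              swaps += 1
              j -= 1
      return comparisons, swaps

  for n in (100, 200, 400):
      data = random.sample(range(10 * n), n)
      comps, swaps = insertion_sort(data)
      print(n, comps, swaps)   # doubling n roughly quadruples both counts: O(n^2)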


I would consider what you described to be "algorithms" (or "complexity theory"), a particular sub-area of computer science. There are quite a few other areas of CS.

The ACM organizes a number of SIGs (Special Interest Groups), each with their own (often several) conferences [1]. Some of the more well-known SIGs include SIGPLAN (Programming Languages), SIGGRAPH (Computer Graphics), and SIGLOG (Logic and Computation). What you described probably falls best under SIGACT (Algorithms and Computation Theory).

> the mathematics behind counting.

Traditionally, this is combinatorics, not any particular part of computer science. Complexity theory concerns itself specifically with counting the amount of resources used by a formal process.

[1] https://www.acm.org/special-interest-groups/alphabetical-lis...


Computer science is much more like mathematics than physics. It feels wrong to call "computer science" a science, in the same sense that it would feel wrong to call mathematics a science.


In the Netherlands the broad field of CS is called Information Technology.


"Data science is not about data" - I can see a pattern here


Computer Science = The Study of Information Processing


A better name would be "Computing Science".


Maybe CS is a branch of information theory…


computer science / information technology

they got them mixed together, it should be

computer technology / information science


tldr: There is no other science that is based on the study of a tool. A computer is a tool and so cannot be the main subject of a scientific field. Hence Computer Science is a bad name.


It's technically not even a science. It's in the realm of logic and maths. After all, we don't call algebra a science; why does computing all of a sudden need the word?

So to sum it all up: computer science is neither about computers nor is it a science.


It's true that for some time now "science" has been used almost exclusively for disciplines that make use of experimental methodologies, but in earlier usage it just meant roughly "the process of creating knowledge." The German "Wissenschaft" (which is normally how "science" is translated) is closer to the older meaning (though I'm not fluent in German, so I'm just going off what I've been told).


But the inconsistency still exists regardless of the historical etymology.

Math is not a science. A mathematician is not a scientist. Why is computing a science?


Well, I'm not sure when the common usage of "science" changed, but it's possible that when "computer science" was coined in the 1950s [1], the older usage was still at least widely understood. Perhaps, given the present-day importance of a discipline that's stuck with an outdated name, we can return to the older usage, which is IMO better.

[1] https://books.google.com/ngrams/graph?content=computer+scien...


I'm curious how mathematics missed out on the word "science." The meaning of the word "scientist" is consistent with how it's not ascribed to a mathematician, as mathematicians don't do anything related to the scientific method.

The thing is, you stated that around this time the term "science" was broader and just meant acquiring knowledge... so how come "science" wasn't applied to mathematicians? Technically, according to what you stated, the definition was broad enough to apply to them.


Logic, Ethics, and Aesthetics are the three normative sciences. Mathematics falls comfortably within the domain of logic, and thus mathematics is a scientific enterprise. Computing science is the subset of the subset that deals with getting actual computable results.


Did you just group Logic and aesthetics together into one thing?

Beauty and ethics are subjective. Logic is not.

Either way following this definition of "normative science" neither logic nor computer science nor math goes under it: https://en.wikipedia.org/wiki/Normative_science

The reason is that this definition mentions the notion of a preferred outcome. Logic, math, and computer science do not deal with "preferred outcomes"; these fields are all just axioms and the consequences resulting from said axioms, preferred or not.

Pedantry aside, nobody considers a "mathematician" to be a "scientist" when using the terms as they are commonly used in English. This is a total inconsistency.


I’m using Peirce’s definition of the normative sciences[1]. As is not uncommon in English, the same words or phrases can denote different concepts and the wiki link you shared is a case in point.

[1] https://www.isko.org/cyclo/peirce


Feels arbitrary. You don't group oil painting with mechanical engineering, so why group math with beauty?

This ISKO organization, if they do indeed follow Peirce, is incredibly strange. Case in point: https://www.isko.org/cyclo/peirce1.jpg

Philosophy is under mathematics, which is not under logic? Philosophy, like literature, is entirely a separate category, and logic isn't even mentioned in his arbitrary grouping.


This reminds me of the Holy Roman Empire: it's neither holy, nor Roman, nor an empire.


There's this old picture from a 1986 MIT lecture: https://miro.medium.com/max/4448/1*-yaqXNO1tUwVkUpyOqTT7Q@2x....



Good find. My post is voted down, but this authoritative source is literally saying the same exact thing.


Defining a thing by what it's not. Now that is a science unto itself.


I suppose if we call it computer science, then perhaps all the other forms of science should really be called "reverse engineering" (especially biology.)


Biology is consistent with science in the sense that there exist people called scientists in biology who do experiments utilizing the scientific method.

In math everything is purely theoretical conjecture. No hypotheses, no testing, no observation, just derivations of theorems from axioms. Same with "Computer Science": it's all logic games.

That's why mathematicians are not known as scientists. For computing, I believe the term "computer science" was likely mistakenly coined by someone who didn't know the full extent of the word "science."


Modern math is more abstract.

Old math, like the kind from primary school, was more inspired by physical phenomena and interactions with earlier results.

We need more "clever guesses" (let's see what happens if A is B because C) and building up of theories in exercises. But that would not be rigorous...


> Same with "Computer Science" it's all logic games.

Unlike math, though, it has domain-related applications. What are databases, codecs, regexes or neural nets - abstractions or concrete tools for specific uses? It's not all platonic.


Domain-related applications aren't what's studied by "Computer Scientists." You will note that most people who do "domain related" applications call themselves Software developers, Software engineers, etc. etc.

If someone calls themselves a "computer scientist," they are indeed usually exclusively studying the logic game.



