Teach Yourself Logic 2017: A Study Guide [pdf] (logicmatters.net)
257 points by adamnemecek on Aug 28, 2017 | 43 comments



I've been reading through this (https://www.cs.cmu.edu/~fp/courses/15317-f00/handouts/logic....), and have found it way more useful and fun. I expect it's much closer to what the HN crowd would be interested in.


At the risk of getting downvoted, I'm just going to say that I really hate this book. First of all, it moves way, way, way too fast. I skimmed it, and chapter 3 is already the Curry-Howard isomorphism? That took us a semester of λ-calculus to come to grips with. These are big results.

Second of all, it just seems like a mish-mash of three or four different classes put into one: computability, type theory, first-order logic -- each of these can be its own concentration.

Third, it uses type-theoretic terminology ("elimination" and [->e] instead of, e.g., modus ponens [MP]).

Finally, imo that notation really sucks (the top-to-bottom branching is really jarring); I've only seen it used in Type Theory and λ-calculus books.

Some books I suggest: this one from the University of Toronto[1] or this one from one of my favorite classes I took at UCLA[2]. If curious, you can see some problem sets on my website[3].

[1] http://euclid.trentu.ca/math/sb/pcml/pcml-16.pdf

[2] http://www.math.ucla.edu/~dam/135.07w/135notes.pdf

[3] https://dvt.name/logic


I have no dog in this race, but...

> the top-to-bottom branching is really jarring

What do you mean? From what I've seen from skimming the first few chapters, it uses 100% standard natural deduction proof trees. How would you present the proofs?

(The "linear format" is indeed ugly, but it doesn't seem to be meant for human consumption.)

> I've only seen it used in Type Theory and λ-calculus books

That's the point of the Curry-Howard isomorphism: It's all the same thing. It makes sense to use the same notation.


> I've seen from skimming the first few chapters, it uses 100% standard natural deduction proof trees. How would you present the proofs?

You're totally right. I just really don't like proof trees, nor do I think they're clear when first starting out studying logic.


> First of all, it moves way, way, way too fast.

FWIW, from the intro:

> However, if you are hoping for help with very elementary logic (e.g. as typically encountered by philosophers in their first-year courses), then let me say straight away that this Guide isn’t designed for you. The only section that directly pertains to this ‘baby logic’ is §1.5; all the rest is about rather more advanced – and eventually very much more advanced – material.

So, he is pretty clear about the guide skipping the first year of foundational material. The title is certainly misleading, however. Maybe

"Teach Yourself Logic Next Year: After spending a year learning basic stuff somewhere else"

would have been better.


I believe you and the parent are talking about two different resources. You're referring to _Teach Yourself Logic 2017_, but I think the parent is referring to the material from CMU.


It's also much more modern. To be perfectly honest, if you are not doing automated theorem proving you probably never need to know about things like first-order logic or classical logic. Alas, we teach first-year students boolean algebra instead of the Brouwer-Heyting-Kolmogorov interpretation and hopelessly confuse everyone with "paradoxes" before anyone even mentions proof theory...


Just a heads up, this pdf is not a logic tutorial or textbook, but an annotated bibliography.

That said, if you got stuck on material implication (practically the first topic discussed in the first day or two of logic study), as I did, and eventually were able to understand it, I'd love to know how. (Note: only if you got stuck first, not if you never had a problem understanding it.)


You can think of "A => B" as stating "there is a way of transforming proofs of A into proofs of B". If "A" is false, then there are no proofs of "A", and thus there is trivially a way of transforming proofs of "A" into proofs of "B" (since you will never be confronted with a proof of "A"). So a false statement implies anything.

If "B" is true, then there is some proof p of "B". Now here is a way of transforming a proof of "A" into a proof of "B": given a proof q of "A", just return that one proof p. So anything implies a true statement.


This is actually the best way to understand it. But without some background (Proof Theory 101, the Deduction Theorem, Metalogic 101), I'm sure this sounds like gibberish :)

NB: Although you muddy the waters a bit. "A => B" is literally "A proves B". And to see why an inconsistent set implies anything at all, we just need to look at the Principle of Explosion[1] -- which is probably less contentious than material implication.

[1] https://en.wikipedia.org/wiki/Principle_of_explosion


Small clarification (I wrote my parent post super late last night). Via the Deduction Theorem (plus modus ponens for the converse direction), we have:

   ⊢ A -> B  iff  A ⊢ B

That's what I mean by "A proves B".


That’s how I understand it now after a bunch of exposure to type theory and internalising Curry–Howard. But as a programmer my earliest practical way of thinking of it was “How would I use an ‘implies’ operator in programming?” Things like these:

    (a => b)  ==  (a ? b : true)  ==  !a || b

    if (a)
      assert(b);
    // ==
    assert(a => b);

    #if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED)
    // ==
    #if defined(_MSC_VER) => defined(_NATIVE_WCHAR_T_DEFINED)

    if (!p || !p->buffer) return 0;
    // ==
    if (p => !p->buffer) return 0;
That is, just talking about truth values, the results of proofs rather than the proofs themselves.
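
In the same spirit, here is a tiny self-contained sketch of that truth-value reading (Haskell, with implies being a made-up name rather than a built-in operator):

    implies :: Bool -> Bool -> Bool
    implies a b = not a || b            -- same as: if a then b else True

    -- sanity check against the ternary reading (a ? b : true)
    agreesWithTernary :: Bool
    agreesWithTernary =
      and [ implies a b == (if a then b else True)
          | a <- [False, True], b <- [False, True] ]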


This is interesting, because I'm trying to get to grips with where the abstract and the practical meet. But I'm not sure all of your examples fit.

Your first example if(a) assert(b); really does fit the intuitive idea of implication. The other two just look like cases that happen to have the same boolean structure as implication, but that's not their meaning.

For example

   if(!p || !p->buffer)
     return 0;
Means (roughly) "check if p's buffer exists, which of course requires that p also exists". So there is a kind of implication here, but it goes in the direction

   p->buffer => p
Which is not what you are talking about.


The “#if” example seems clear enough to me. We always want to use wchar_t, but if we’re using MSVC, then only if it’s available.

The “buffer” example isn’t great, I agree. I just pulled these from random sources in my projects directory by grepping for “not…or”.


Your "#if" logic sounds perfectly reasonable, but what does it have to do with implication? Except that the boolean test just happens to have the same truth table.


I just think of it as: P -> Q only says something about Q when P is true; no information is given about Q when P is false, so Q can be whatever it wants in that case. And since any value of Q is acceptable, the statement holds regardless of Q's value (when P is false).

We only have the chance to return false when P is true, because that's the only time P says something about Q (namely, that it's true).


Not to trivialize your issue, but I am always dismayed at how much time is spent trying to map English statements onto those of sentential logic, especially in cases where their usages differ, as with inclusive "or" or material implication.

In a more mathematical study of logic, you define your logical connectives using truth tables. So the material implication is nothing more and nothing less than

  X | Y | X -> Y
 ----------------
  F | F | T
  F | T | T
  T | F | F
  T | T | T
For the unfamiliar, a truth table works as follows: in the sentence "X -> Y", evaluate the truth values of X and Y (If X and Y are themselves sentences, this follows recursively. If they are atomic predicates, they should have fixed truth values). Then, refer to the appropriate row of the table to find the value of the sentence.
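
That recursive evaluation is mechanical enough to write down directly; here is a sketch with a toy sentence type of my own (Haskell):

    -- A toy sentence type and a recursive truth-table evaluator.
    data Sentence = Atom Bool              -- atomic predicate with a fixed truth value
                  | Not Sentence
                  | Imp Sentence Sentence  -- material implication, X -> Y

    eval :: Sentence -> Bool
    eval (Atom v)  = v
    eval (Not x)   = not (eval x)
    eval (Imp x y) = not (eval x) || eval y   -- exactly the table above

    -- e.g. eval (Imp (Atom False) (Atom False)) == True, the table's first row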

Rather tangentially, my logic teacher would always say "to verify a material implication, we must falsify the hypothesis or verify the conclusion". I'm not sure that helped any of her students come to terms with the connective, but it is a succinct (if jargon-heavy) way to summarize that table.


There are two ways of looking at logic, roughly speaking:

- Logic as a game (as in a set of rules). E.g., in that regard pure math is a game, formal systems are a game, even programming and CS is a game.

- Logic as an applied discipline used to formalize rational discourse, to arrive at a set of conclusive facts, with sound and valid reasoning, given a problem (which could be a problem in the real world) and a set of assumptions.

Part of my frustration with logicians is that they almost exclusively treat logic as belonging to the first case, and hate it when you ask them about the second case.

I am interested in logic partly because it's an aspect of the foundation of pure mathematics, but around the time I started learning logic, that was the least of my concerns. Instead, I was highly motivated by the need to have a formal system in place which I could rely on while arguing about philosophical issues, and especially topics related to religion vs atheism, because the arguments I would hear from the other side could safely be described as "stupid reasoning".

However, when I started taking logic seriously, I realized that not only do the fundamental rules of logic sound stupider than the logic I was hearing from my opponents (hint: material implication), but logicians have no interest in helping the logic student connect the formal ideas to real-world problems.

When 99.999% of logicians, no matter how experienced they are, tell me to treat logic as a game, and nothing but a game, what I hear is this: I don't know how to apply logic to philosophical discourse and real world problems, and I am content with my state of knowledge, therefore you should be happy treating logic as just a game too and stop asking any deeper questions, or trying to connect it to philosophy and rational discourse.


Logicians have been extremely useful to fields like formal software methods, linguistics and semantics, programming language design, etc. The latest rumblings are that linear logic may be perfectly applicable to reasoning about blockchains; see https://arxiv.org/abs/1506.01001

Wanting to treat religion vs atheism using tools from logic strikes me as... kind of old-fashioned or eccentric! You seem to have chosen the most contentious, culturally complex, ambiguous topic of all time.

Maybe you'd be amused by this -- https://github.com/FormalTheology/GoedelGod -- an open source effort to formalize Gödel's proof of God's necessary existence with the framework of higher-order modal logic.


First of all, logic is a game. A game in the same sense that algebra is a game. Allow me to give you a very simple analogy.

AC circuit theory and quantum mechanics require the use of complex numbers -- that is, the square root of negative one. In a very real sense, taking the square root of negative one sounds incredibly stupid. In fact, imaginary numbers were a very divisive issue when they were introduced (not unlike negative numbers before them). What does it really mean to use i? What does i map to in the real world? We're trying to calculate voltages here... does it make any sense?

Well, probably not. I could start talking about how i sort of extends the number line into a second dimension, and so forth, but that would only move the goalposts (what about the hypercomplex units j or k or l...?). So the best way to see sqrt(-1) = i is as a rule.

Second of all, material implication is also a rule. There's nothing to connect to the real world. The problem with logic students is that they often don't understand the differences between abductive, deductive, and inductive logic.

You mention atheism vs. religion (I assume you mean theism vs. atheism, as you can presumably have some kind of religion without a deity). When trying to win this kind of debate, you first need to pinpoint what kind of argument you're dealing with. If, for example, you're dealing with an inductive argument, the rules of deduction will not hold -- especially a rule like material implication[1]. I think that your problem is trying to apply formalisms built for one kind of game to another kind of game. Natural language is not logic, just as Chess is not Checkers.

Going back to material implication for a second, what really made material implication "click" for me was studying the Deduction Theorem[2]. This is, of course, a bit more advanced, but understanding MI from a "provability" perspective really clarifies why it's so obviously true.

[1] http://ac.els-cdn.com/0888613X94900027/1-s2.0-0888613X949000...

[2] https://www3.nd.edu/~cfranks/dt.pdf
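
For reference, the Deduction Theorem from [2] can be stated as: from a derivation of B using hypothesis A, one can build a derivation of A -> B,

   Γ, A ⊢ B   implies   Γ ⊢ A → B

and together with modus ponens this gives the two-way reading of -> as provability.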


Okay, here's a concrete example, if you or anyone reading this is up for thinking about it:

John's wife Jennifer says the following to her friend, "If John left office by 4:55 pm, he'd be home in a minute" (e.g., she claims that if John's office leaving time is 4:55 pm, he'd be home at 4:56 pm).

Now, John never leaves office at 4:55 pm. It is always 5:00 pm or later. If he leaves at 5:00 pm sharp, it takes about 90 minutes on average for him to get home. If there is no traffic, it will still take at least 30 minutes for John to get home. So we know, as a behind-the-scenes knowledge, that if John left at 4:55 pm, there is no way he'd reach home before 5:25 pm.

But, since John has never left office before 5:00 pm, and let's assume he never will, according to the rules of logic, Jennifer's claim is True: It is True that if "John left office by 4:55 pm" then "it'll take 1 minute for him to get home".

Now, I don't read "A => B" as "A implies B". I find it more convincing to read it as "A => B" being True means "Information about A and B is insufficient to rule out implication". If "A => B" is False, that means it is not true that information about A and B is insufficient to rule out implication. In other words, information about A and B is sufficient to rule out implication. And the only way that's possible is when A is True and B is False.

Now, given our problem, if we only look at Jennifer's claim, we would agree with her, because A is False and B is False. (A: John's office leaving time is 4:55 pm or earlier; B: John's travel time is 1 minute.) If all we know is that A is False and B is False, we can claim that this is insufficient information to rule out Jennifer's claim.

But, we do have extra information, call it C: that it takes John at least 30 minutes to get home, no matter what.

The question is, how do we incorporate this extra information, C, in our problem, consisting of A and B, if we wish to refute Jennifer's claim of 'A => B' being True?


> But, since John has never left office before 5:00 pm, and let's assume he never will, according to the rules of logic, Jennifer's claim is True: It is True that if "John left office by 4:55 pm" then "it'll take 1 minute for him to get home".

A-ha! What logic are you thinking of? Jennifer is making an inductive -- a probabilistic -- claim. So right off the bat, we throw out our deductive rules. There might be an "if" and there might be a "then", but there is no material implication (=>) here.

There's a weaker form of something we can call "probabilistic implication" which will naturally lend itself to plenty of holes: there's a nonzero chance that John might be abducted by aliens, for example, and he never makes it home. These probabilities are built into Jennifer's original claim. There's nothing to refute.


A: John's office leaving time is 4:55 pm or less.

B: John's travel time is 1 minute.

C: John's travel time is 30 minutes or more.

A is False, B is False, C is True (as given). Hence A => B is True (a ridiculous conclusion).

I don't see any probability or induction in this problem.


Let me give you a simpler example we can work with.

   1: If the sun sets, it will rise.
   2: The sun set.
   Conclusion: The sun will rise.
Even though the above might seem like an application of modus ponens, it's actually not. This argument (like yours with John and Jennifer) is inductive -- probabilistic -- in nature. There's some debate when you truly get down to it (is causality empirical?), but almost all claims about the natural world, be they about John's driving or the Sun rising, are inductive; and as such, deductive rules do not apply. In the case above, the sun can, technically, disappear out of existence. Obviously, this is very unlikely, but, per Poincaré and Boltzmann, it is in fact guaranteed to happen at some point.

In Jennifer's case, her argument is inductive because there are a few (actually an infinite number of) assumptions she's making: that he has a full tank of gas, that lightning won't strike him, that he doesn't get abducted by aliens, etc.

This differs from a deductive claim, e.g.:

   1: Two sets are equal if and only if they contain the same elements.
   2: A is a set that contains {a, b, c}
   3: B is a set that contains {a, b, c}
   Conclusion: A = B
See the difference?


> The question is, how do we incorporate this extra information, C, in our problem, consisting of A and B, if we wish to refute Jennifer's claim of 'A => B' being True?

If you want an intuitive result (Jennifer's claim is false) why do you start with unintuitive preconditions (John will never leave by 4:55 pm.)? If you allow for the possibility that John might leave earlier, then Jennifer can easily be proven wrong by A being true and B still being false.

If you really only want to consider the case A = false, B = false, then you can somewhat counter "A => B" by noting that "A => not B" is also true. Both statements only give you additional information when A is true, in which case "A => B" is false and "A => not B" is true. If A is never true, neither tells you anything interesting.
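
To see this concretely, a quick check (Haskell, with a home-made implies):

    implies :: Bool -> Bool -> Bool
    implies a b = not a || b

    -- with A false, "A => B" and "A => not B" both hold, whatever B is
    bothVacuouslyTrue :: Bool
    bothVacuouslyTrue =
      and [ implies False b && implies False (not b) | b <- [False, True] ]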


I think I might be confusing 'material implication' with 'logical implication' (also known as entailment, or logical consequence).

Because from your last sentence, I thought, if A => B is True and A => not B is True, then A doesn't imply B, or B doesn't (necessarily) follow from A. I think this notion of "B follows from A" is something represented by entailment, not by material implication. (?)

Would it be correct to say that material implication is just a formula (in which case it shouldn't even be called an implication or a conditional, but something like simply a 'material formula'), while entailment is the one that has a real world interpretation? (Also entailment cannot be encoded as a formula but has to be proven on a case-by-case basis?)

edit: But this is again a problem. Because in order to prove entailment we'd invoke a logical proof, which would be a sequence (or a tree or a graph) of logical statements with the chain of reasoning connected by, surprise surprise, material implication, which we have already discarded as just a formula with no convincing logical interpretation! (hence our proof is not convincingly logical!)

edit 2: And that is my main issue with how logicians try to justify material implication. On one hand, they try to convince you that MI is nothing but a formula. On the other hand, they use MI as a connecting glue in mathematical proofs which to me sounds like they're using it as 'entailment'. This feels like a double standard at best.


> I think I might be confusing 'material implication' with 'logical implication' (also known as entailment, or logical consequence).

Both are intended to work essentially identically, but when talking about logic, it might be helpful to make the distinction. Implication as a formula is usually written with →, semantic entailment between formulas with ⊨. That they are essentially identical is because, if A → B is entailed unconditionally ( ⊨ A → B ), then A entails B ( A ⊨ B ).

When you apply this distinction to

> if A => B is True and A => not B is True

you have ( ⊨ A → B ) and ( ⊨ A → ¬B ). That is not very different from ( A ⊨ B ) and ( A ⊨ ¬B ), or ( A ⊨ B ∧ ¬B ), where the consequence is contradictory. Unless logic is broken, you can only arrive at contradictions when you already started with them, so A itself must be contradictory. When you said that A is false, that's exactly what you required: A is contradictory.

Maybe the above doesn't help you much, but it should make clear that your problem isn't just with the difference between implication and entailment.

EDIT: I didn't see your edits. I think I already somewhat responded to edit 1 (implication does have an interpretation, as an encoding of entailment). Regarding edit 2: Implication is justified by mostly working like our intuition would expect. Maybe you should try coming up with alternatives and test how well they work.

MORE EDIT: I think what you're actually confused about is the difference between a formula being true only under certain circumstances vs. always. Implication being true when the premise is false is a bit like a broken clock being sometimes right: if you only focus on that one case, it's not what you expect, but when you take all cases into account, it's the only way it could be.


I think of it in programming language terms.

The syntax 'A => B' is just a mechanical rule, which is macro-expanded into:

    if A then B else true

where A and B are both bools, and the 'if..else' is an expression, not a statement. It evaluates to B when A is true, otherwise it returns true.


> as a behind-the-scenes knowledge

You can't reason about information you haven't included in the logic. To say it another way, if a predicate doesn't have a letter representing it, you won't get any conclusion about the predicate.


That's an interesting dichotomy to establish. At first, it appears to mirror the theoretical/applied division we see in a lot of fields. But in addition to an applied mindset, you seem to be demanding something stronger out of logic: applicability to philosophical discourse. To me, this seems like a very lofty goal: I would be very surprised if someone treated religion (to use your example) productively with formal logic.

However, just because it's not immediately applicable in one domain does not make logic useless. A great example of this is Prolog, a logic programming language which uses declarative statements reminiscent of first-order logic. It can solve real-world problems, though it has not yet seen wide adoption as a programming language. People do use automatic theorem provers, which have their roots in logic. They are used not only to further the "game" of mathematics, but also to find bugs in real hardware and software systems. These applications have very little to do with philosophy, and nothing to do with translating between English and logical sentences. However, I think they are quite interesting.

(I'm not sure what to tell you about logic "sounding stupid". In the end, things like definitions of connectives are just conventions. Although it's good to understand their motivations, I'm not sure how much good objecting to them does. It kind of reminds me of half serious attempts to adopt tau := 2 pi as mathematical notation; maybe using a different constant makes more sense to you, but finally, it won't make a big difference, and you should probably just continue to use pi for interoperability with the rest of the world.)


I'm wondering what would be your take on a concrete example that I just posted here: https://news.ycombinator.com/item?id=15115319


Personally, I don't really like the translation "information about A and B is insufficient to rule out implication". At least when I say "A implies B" I mean exactly "A => B". But maybe I hang out with too many mathematicians.

Other common translations you might be interested in are "A suffices for B" or "B is necessary for A". Though admittedly these particular words have a bit of a mathy tone, so no wonder they are so precisely tied to the definition of implication.

As far as your idea of "outside information" goes, I'm afraid it kind of breaks outside the bounds of sentential logic, which cannot deal with numbers in the manner you are attempting. However, that does not mean we cannot analyze it with logic. It should be simple to put your sentences into the logical "language" of Arithmetic and to prove more or less what you would expect: A is false (because John never leaves the office before 5), B is false (because it takes him 30 minutes to get home), and that "A => B" is true (by the definition of material implication, however vacuously).


Logic is mostly not about the semantics of natural languages, because even very simple sentences depend on a large amount of implicit information. Consider the case of computer vision, and how no one gives a formal definition of objects that should be recognized. Instead, they throw lots of data at a neural network and hope it learns the right thing.

If you really want to apply formal reasoning to philosophical issues, you will need an understanding not only of elementary propositional logic, but also of higher order quantification, formal language semantics and probably modal logics and probability theory.

If that doesn't discourage you, let me try to help you over the bump that is material implication.

Consider the statement "If it's raining, the sky is cloudy." (And ignore any counterexamples that come to mind.) Everyone seems to get stuck on the case where it's not raining, and the statement is true regardless of the presence of clouds.

Now treat the statement as a general rule instead, and make its time-dependence explicit: "If it's raining on a certain day, the sky is cloudy on that day."

On Monday, it rains and the sky is cloudy. You look at the rule and see that it agrees with reality. The opposite rule "If it's raining on a given day, the sky is not cloudy on that day." is wrong. (Hopefully that's obvious.)

On Tuesday, it doesn't rain and the sky is still cloudy. You look at the rule and realize that it doesn't have anything to say about your situation. That means it can't be wrong, so the rule must be true in this instance. The opposite rule is also right in this instance, because it doesn't say anything either (but it has been wrong before).

On Wednesday, it doesn't rain and the sky is not cloudy. Both rules are also right about this day.

Now, when we state a rule, we usually mean that it always holds. (This is called universal quantification and you will learn about it when you continue studying.) Since the first rule has been correct in every instance so far, it is simply called true. The second rule has been wrong once, so it is disqualified forever, since a rule that's sometimes right and sometimes wrong doesn't give you any certainty.

And what about rules like "If [something impossible], then [something ridiculous]."? Since their conditions are impossible, they never tell you anything. They are the same as saying "". That's useless, but not wrong.
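
If it helps, the three days can be checked mechanically; a small sketch (Haskell, with names invented for this thread):

    -- each day as (rained, cloudy): Monday, Tuesday, Wednesday
    days :: [(Bool, Bool)]
    days = [(True, True), (False, True), (False, False)]

    rule, oppositeRule :: (Bool, Bool) -> Bool
    rule         (r, c) = not r || c        -- "if raining, then cloudy"
    oppositeRule (r, c) = not r || not c    -- "if raining, then not cloudy"

    -- the rule survives every day; the opposite rule fails on Monday
    check :: (Bool, Bool)
    check = (all rule days, all oppositeRule days)   -- (True, False)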

EDIT: If something I wrote is incorrect, or you think it's not helpful, or not expressed clearly, please tell me! I really want to get better at writing explanations.


I think time is spent trying to map Logic to English because it's our more familiar framework for thinking through logical problems. We do it every day when problem-solving and weighing options, so it's only natural that we want a way to turn the abstractions of formal Logic into a concrete 'cause and effect' type of explanation via language.

That being said, I agree that one needs to try and resist this urge, otherwise you won't get past the barrier that material implication presents. It really just doesn't make sense at first, and only after a long time of playing with it and using it within larger statements does one start to suss out some kind of concrete mechanical meaning for its behavior.


My way of 'getting it' emerged only after a few weeks, when we spent quite a lot of time on a proof that not (not A) is equivalent to A in propositional logic.

Then I realized that I shouldn't think about these in terms of ideas about reasoning I already had in my head, but to internalize that this is a self-contained, rule-based system, where we have some arbitrary statements given as a base and some arbitrary derivation rules, and our only goal is to be able to find what we are able to derive.

"Does this statement have a proof?" then means "is there a chain of derivations from the base that leads to this statement?"

I still got tripped up on 'implication' (which in my language means something closer to what equivalence means in logic, for some reason) and 'or' (which in my language usually implies either-or), but when I looked at these with "it's just a rule system, it doesn't have to mean the same thing it does in general language" I was fine.

Then, of course, the usage crept from maths back into my everyday language, and I have confused many of my less mathematical friends (e.g. when they ask "if you are in the city, would you go to a pub?", my answer is always "yes", but what they really meant is "Are you in the city, and will you go to a pub?").


When I cover the topic, no one is ever confused by T -> F. For the others, I have an example (in the metalanguage, obviously).

Consider "if a number is prime then it isn't a perfect square." Should be true, right? If you can't split it then you can't split it into equal parts.

Now take the number to be 6. That suggests that F -> T should evaluate to T.

Take the number to be 4. That suggests F -> F is T.

And 3 suggests what to do with T -> T.
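
Those spot checks are easy to run; a naive sketch (Haskell, definitions mine):

    isPrime :: Int -> Bool
    isPrime n = n > 1 && null [ d | d <- [2 .. n - 1], n `mod` d == 0 ]

    isPerfectSquare :: Int -> Bool
    isPerfectSquare n = any (\r -> r * r == n) [0 .. n]

    -- "if a number is prime then it isn't a perfect square"
    claim :: Int -> Bool
    claim n = not (isPrime n) || not (isPerfectSquare n)

    -- claim 3 == True (T -> T), claim 6 == True (F -> T), claim 4 == True (F -> F)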


Your example makes sense, but at the same time it's something I like to call "a positive example". If you explain it to new students, they will agree with you and think they understand material implication. But later, if you test their knowledge using one of the counter-intuitive examples, more often than not they will have a hard time reconciling it with their understanding (e.g., the one I posted here: https://news.ycombinator.com/item?id=15115319).


People don't learn things by being told them. For anything that is hard, they learn by gradual approximation.


When I first encountered material implication I thought it was just stupid. False implies anything? Everything implies true? That's not what "implies" means; there must be some sort of relevancy, surely?

The block for me was treating material implication as if it defined what 'if' meant in natural language. But it doesn't. 'If I am a turtle then the president of the United States is Hillary Clinton' isn't a statement about a relationship between whether or not I'm a turtle and who is the POTUS. It's a formal sentence. It's like a regular expression or something. It has no real meaning, it's just symbols. It's just 'Q Y C' or '$ ~ F' or 'A -> B' in a formal language. We give semantics to those symbols by assigning a truth value to 'A -> B' as a function of the truth values we assign to the symbols A and B.

It seems silly if we substitute something clearly false for A and something also false and clearly unrelated for B, but that's because when you read the statement above it looks like natural language, and it is natural language, but logic isn't about natural language. It's about formal language.

If you write it as 'I am a turtle -> Clinton is the POTUS' then it becomes a bit more clearly just 'arbitrary false thing -> arbitrary thing'.

If you try to divorce the concept of material implication from the natural language construct of 'if..then..' then you shouldn't have a problem.

Remember, material implication is truth-functional, which means that the truth value of 'A -> B' is just a function of the truth values of A and B (namely, it's true unless A is true and B is false), not whether they're related or relevant to each other.


I loved Paul Tomassi's Logic. If, like me, you find all the symbols intimidating and fear that you're not mathematically minded, try it. It's like the iPhone for learning logic: everything is clearly explained, to an extremely accessible degree. It's such a shame that the author died so early; I'll never see it updated or revised.

https://www.amazon.co.uk/Logic-Paul-Tomassi/dp/0415166969/re...


If you're looking for an introduction to classical logic, I highly recommend Peter Kreeft's Socratic Logic. I'm reading it right now.


They don't mention dependent types at all, which is unfortunate. IMO, it's the most promising subfield of logic right now.


Unrelated, but does anyone have resources like these for other related subjects?



