Hacker News | mburney's comments

I thought that Python was designed to be a teaching language.


ABC was designed as a teaching language. Guido van Rossum was an implementer (but not a designer) of ABC. When he started Python, he carried over ideas from it.


Python has all the hype and popularity, but Smalltalk, Pyret, and Racket are far better choices.


As a Smalltalk dev for a couple of years, I feel my biggest moments of enlightenment came not from Smalltalk, but from Scheme (i.e., Lisp) and (yes, not a popular opinion) JavaScript, a prototype-based language that eschews rigid class hierarchies.
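
To show what I mean by that parenthetical, here's a minimal sketch of prototype-style objects in JavaScript (toy names, nothing from a real codebase):

  // No class declaration anywhere -- just a plain object...
  var greeter = {
    greet: function () { return "Hello, " + this.name; }
  };

  // ...and other objects that delegate to it directly.
  var alice = Object.create(greeter);
  alice.name = "Alice";
  console.log(alice.greet()); // "Hello, Alice"

  // Behavior can be tweaked per object, with no hierarchy to rewire.
  alice.greet = function () { return "Oi, " + this.name; };
  console.log(alice.greet()); // "Oi, Alice"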

I still love Smalltalk, but its elegance (message passing FTW) and purity set a bar too high for me, and it still fell afoul of the fact that class hierarchies create syntax/behavior that requires you to re-learn a lot every time you join a new project (too many companies/projects rewired basic class functionality).


When I was 26 I freelanced while traveling. I had a blast, but if I could change one thing, it would be to have saved money (or created passive income streams) beforehand so I could travel for a year without having to freelance. Why? Because you can have more fun and adventure when you're flexible with your time and have minimal responsibilities. Staying out until 6 AM in a Brazilian disco with 10 new friends is a lot more fun if you don't have to be on your laptop three hours later writing code for a company in a different timezone.


Applying economic game theory to a relationship seems like a good way to take all the fun and passion out of it.


Not if you're game theorists! :)

But I don't actually understand that objection. Seems analogous to taking Walt Whitman's side in this exchange: http://www.scottaaronson.com/writings/whitman.html


I think the idea is to reduce the amount of time and mental stress around day-to-day tasks to allow for more time for the relationship.


Well... It's more like this: https://xkcd.com/1319/

I might have to concede that the whole thing is untenable unless you actually enjoy geeking out about the game theory and mechanism design and whatnot.


This feels more like a stunt to get people interested in their start-up. The wife is so far in debt that it looks more like gambling with fake money at a casino night; the game-theoretic model seems to break down when debt is cost-free.


Unless both participants are geeks who like having fun practicing game theory.


I think it's more that there are eccentrics out there, and by definition they do things differently from the usual methods.


Being homeless and finding a way to release an app is a good example of resourcefulness.


> ‘Homeless coder’ saga shows connections matter more than coding skills (washingtonpost.com) 2 points by Libertatea 2 days ago | 0 comments | cached: https://api.thriftdb.com/api.hnsearch.com/items/_search?filt...

> Leo The Homeless Coder (media-results) (imgur.com) 1 point by Empathenosis 3 days ago | 1 comment | cached: https://api.thriftdb.com/api.hnsearch.com/items/_search?filt...

> The Real Story of the Homeless Coder (youtube.com) 4 points by danboarder 5 hours ago | 0 comments | cached

Those are just the first two pages. My point is: "Bravo, Leo, but enough with this Oprah shit."

IMHO this story is borderline spam.


In my experience, most "nerds" are actually way too quick to criticize others. They lack confidence in their social skills, so they form an identity around their supposed technical abilities. Then they use that identity to justify bullying; but instead of beating up someone of diminutive stature, they'll post a tweet about how someone is a moron for not using the correct design pattern in some code they posted on GitHub.

This is not confidence -- this is insecurity and arrogance.


A lot of the time it is also just a lack of understanding of how others will respond.

A lot of "nerd bullying" is unintended and seem to happen through brutally harsh technical criticism that we often don't realize can affect others really badly.

I used to work with a now well-known blogger who is reasonably technical, but not a developer, and I used to fall prey to this with him.

In one meeting, we were chatting while waiting for someone to bring some documents, and he told me "when I first met you, I thought you were a total jerk, because it seemed like in every meeting when I opened my mouth, you'd shoot me down". After a while he realised it wasn't personal, but he still didn't quite understand why I was seemingly picking on him.

And it was true. I did shoot him down a lot. But I was flabbergasted. To me, my criticism was essential technical discussion. What's more, as I told him, the only reason I often criticised his ideas was that they were often good. Good ideas deserve thorough attention. Good ideas deserve criticism, because it is by fixing the rough edges that we turn a good idea into something great. I had, and have, a huge amount of respect for his ideas.

The reason I kept my mouth shut when much of the rest of the management team came up with suggestions was that I often didn't believe those suggestions were interesting enough to be worth it. I had a good sense of when one might gain enough traction to be worth shooting down; mostly, the bad ideas just made me pull back and think about something else.

Sometimes we do bring out the heavy guns for really bad ideas too, but even then there is often a tacit admission of respect on some level, through influence rather than technical proficiency: only the really insecure or clueless waste lots of time criticising someone with no influence. We criticise bad ideas incredibly harshly when they come from people who have the influence to push their ideas through regardless.

But that is rarely aimed at someone who would be all that fazed. And at least in my environment, the less technical members of the team would often easily recognise and kill those bad ideas without any need for me to pick them apart. More often, for me at least, it was people I respected, with ideas I respected, that got the tough responses, because they got my attention.

Back then, I didn't understand that this kind of harsh language was not taken by "normal" people as impersonal technical discussion aimed at helping improve their ideas, but as intense personal criticism.

I hope I've improved in that respect, but I still cringe far too often when I see my "old self" reflected in overly harsh responses that nonetheless seem well intended.


Coming from a programming background as well, I would like to make a couple points:

It helps to preface constructive criticism with praise. Make sure they understand what you LIKE before you dive into "rough edges".

I find it helps a LOT to phrase constructive criticism as a question. For example:

"Do you think it would be faster to use JSON here instead of XML, maybe?"

Just a thought from a long-time customer-interfacing programmer. Are those normally hyphenated? 8)


Yes, this.

There's a difference between criticizing someone and helping them. The former can lead to the latter, but they don't always (or even often!) go hand-in-hand.


Many philosophers have questioned the scientific definition of "correctness". Heidegger, for example, argued that scientific truth is an approximation and measurement of things we already presume to exist. These discoveries may be correct, and they may be true. But what about the more primordial truth (i.e., being) that we take for granted before we even begin a scientific inquiry? This is what he investigates, so pretty much none of the scientific definitions of correctness apply here. But it is very hard to convince followers of scientism that this investigation has any meaning, because they've already closed their minds to this form of thinking.


> Many philosophers have questioned the scientific definition of "correctness".

This concisely and aptly summarizes the reason for philosophy's low standing among intellectual disciplines.

Philosophers are manifestly unqualified to debate the scientific definition of anything, much less "correctness". Beyond this, a suitable definition is too short to be of interest to a philosopher, someone for whom the number of words uttered is always ranked higher than the intellectual content of each word taken separately.

A scientific idea is "correct" if it can be successfully compared to reality.

How hard is that? I hasten to add that no scientific idea ever becomes true for all time -- all such ideas are subject to falsification by new evidence, by new comparisons to reality.

> But it is very hard to convince followers of scientism...

Ah, yes, the "science is just another religion" gambit. It speaks volumes about the depth of modern philosophical thought.

Philosophers compare their ideas to those of other philosophers. Scientists compare their ideas to reality.


> A scientific idea is "correct" if it can be successfully compared to reality.

But this is similar to saying what is real is what corresponds to reality. Do you see the circularity here?

The philosophers who engage in questioning the "real" are not doing it for the reasons scientists engage in discovering "correct" phenomena. The longing for a deeper meaning and clarity beyond scientific inquiry is a spiritual longing. These philosophers are trying to describe ways in which human beings fit in the world, how we can deal with the groundlessness of our existence, what choices we have in light of the anguish that comes from our mortality.

The problem is that many people view this as a competition against science or "exact" thinking. It is not.

I think this quote from Leo Strauss sums up my point:

"Men are constantly attracted and deluded by two opposite charms: the charm of competence which is engendered by mathematics and everything akin to mathematics, and the charm of humble awe, which is engendered by meditation on the human soul and its experiences. Philosophy is characterized by the gentle, if firm, refusal to succumb to either charm. It is the highest form of the mating of courage and moderation. In spite of its highness or nobility, it could appear as Sisyphean or ugly, when one contrasts its achievement with its goal. Yet it is necessarily accompanied, sustained and elevated by eros. It is graced by nature's grace."


> But this is similar to saying what is real is what corresponds to reality. Do you see the circularity here?

The circularity is in your wording, not in the thing itself. A scientist has an idea, one expressed clearly enough that two or more similarly trained individuals can understand the claim. The idea is then tested against reality, in a way that (again) lets similarly equipped observers agree that the result means what it seems to mean.

The outcome is either that the original idea is supported by, or falsified by, the comparison to reality. And the distinction between the idea, and its test against reality, is nowhere confused -- not among scientists, anyway.

> The philosophers who engage in questioning the "real" are not doing it for the reasons scientists engage in discovering "correct" phenomena.

That's for sure -- philosophers much prefer arguing about the meaning of reality to dealing with reality on its own terms. Many modern philosophers, following this trend, slide into deconstructive postmodernism without ever realizing that they've crossed the threshold of absurdity (by posing the argument that all experience is subjective and there is no objective reality as scientists claim, without realizing that their argument justly applies first to the words they've just uttered).


Oh, come on. This is beneath you, surely. Where are these "philosophers," this monolithic horde of abstraction-loving pinheads stuck in the 15th century, too benighted to see the Real Truth right under their noses, too stuck in debates over definitions to think practically about application and science?

You have a cartoon version of philosophy in your mind that is in sore need of remediation. These are precisely the kinds of questions that philosophers have been utterly preoccupied with for centuries. As if Bradley, Dewey, and William James never existed! As if the very idea of a pragmatic, "reality-based" science had not been proposed and debated rigorously for decade upon decade.

As if the subject-object problem weren't at the core of the foundations of modern science! Far from being some late, decadent conceit of a handful of disconnected postmoderns, the problem of objective knowledge has been at the very foundations of modern science through to the present day.

Have you read a single volume of philosophy in the last decade? Are you not aware that the Grand Poo-Bah of the modern philosophy of science, Popper himself, redefines "Objective Knowledge" to deal with that very problem?

If you are content with a naive, self-contained scientism that remains dogmatically immune to philosophical critique, that's fine. You're certainly not alone. But let's dispense with the sweeping generalizations that have no bearing whatsoever on the reality of the history of philosophy, science or ideas in general.


> Have you read a single volume of philosophy in the last decade?

Do you have an opinion on the utility of suicide? But how can you, without having personally committed suicide? Am I getting through to you?

> You have a cartoon version of philosophy in your mind that is in sore need of remediation.

You mean the thesis that philosophers argue for centuries without ever resolving anything? That's hardly controversial.

> These are precisely the kinds of questions that philosophers have been utterly preoccupied with for centuries.

Q.E.D.


Yes, I think you've gotten through to me. You're willing to hold strong opinions regarding the core discipline of the Western Tradition in a state of abject ignorance, on the grounds that actually educating yourself on the topic before forming such an opinion is analogous to committing suicide in order to understand suicide.

That's as bizarre a rationale for willful illiteracy as I can think of. You seem content with it, and I wish you well.


Where to begin with this?

This is naive scientific triumphalism of the first order, utterly illiterate with regard to the philosophical foundations of science and the rigorous, multi-layered debates over the status of scientific statements that have been ongoing for centuries to the present day: from Bacon, Galileo and Descartes (with deep roots in Lucretius, Plato, Aristotle, Thomas, Epicurus, Heraclitus) through Spinoza, Leibniz and Newton himself, on into such titans as Kant, Hegel, Husserl, Bergson, Heidegger, Russell, Whitehead, Popper, Kuhn, Lakatos and Quine.

And yet! And yet! It was all so simple, gads and forsooth!!

I give you the final word, gentle reader: it was all for naught, these men with their devotion to rigor and clarity of thought, their conversations spanning centuries, their volumes of pointless abstraction, Hegel with his historicism and Kant with his categories, cynical Kuhn and his paradigms, pragmatic Popper with his Worlds 1, 2 and 3, Whitehead with his ingressions and pseudo-Aristotelian speculations, and Bergson teasing out duration and intuition.

Throw out your Routledge Encyclopedia and behold the simplicity they all failed to grasp:

A scientific idea is "correct" if it can be successfully compared to reality.

How hard is that indeed!! Not hard at all. Simplicity itself, devastating in its directness and comprehensive competence, lovely in its completeness. Centuries of debate quelled in a single utterance. Well done.


> This is naive scientific triumphalism of the first order ...

Translation: "Okay, science works. Stop gloating!"

> Centuries of debate quelled in a single utterance.

Yes, and correct.


Look, I sympathize. Philosophy, as the core discipline of human thought, has established beyond any doubt that there are limits to empirical human inquiry, and scientists don't like that idea. The standard response of scientific cheerleaders like, for instance, Lawrence Krauss, is to simply sniff at the entire enterprise, and assume there must be some problem with "those philosophers," rather than entertain the notion that there may, in fact, be limitations to reason and scientific inquiry.

This is fine. If you're too lazy and content in your own unexamined assumptions to read a few hundred pages of the men who did the hard work that gave you your entire livelihood, it will not affect your ability to do practical research.

But let's be clear about some of the practical implications of that position:

* Science does not need to be rational in any meaningful sense. (Go ahead and try to reinvent that idea without falling ass-backwards into the same problems "those philosophers" have been working through for centuries. Go ahead, I'll wait.)

* It does not need to be consistent within its own assumptions.

* Science is de facto "true", simply by virtue of being practiced by someone who claims to be doing science.

* Since we've now abdicated the rigorous discipline of establishing secure definitions, the notion of empirical science is now free-floating, ad hoc, and vulnerable to redefinition at any turn. The only thing required is for a group of men who call themselves "scientists", but who have a different agenda, to take control of a majority of significant journals, or of the community at large, and impose their definition by fiat.

Philosophy may seem to be arcane and needlessly obsessed with definitions. And when its obscure language intrudes into the hard work of real empirical research, it can seem to be a giant non sequitur that can be easily ignored, as science moves forward (whatever "forward" means, now that we can't establish what is truly salutary and what is not).

But such willful ignorance not only skirts the fact that it was precisely this philosophical examination of the limits and range of human knowledge that established science to begin with; it also sets enormously dangerous precedents, a sort of intellectual stare decisis for the future of human inquiry. It opens the door to sophistry, demagoguery and ultimately pure irrationalism.

History is long, and intellectual tyranny is opportunistic. The belief that scientists can maintain integrity and above-board intentions in the long term on good will alone, without deep self-awareness of core philosophical discipline, is absurdly naive. Without a community committed to rationality (which is what philosophy provides, even if it only sets up negative limitations that seem unsatisfactory to scientists who want carte blanche to do whatever they want), the entire discipline will in the long term most likely be taken over by non-rational concerns: commercial, military, governmental or even religious.

Absent the basic language provided by philosophy, which can give at least some basic definitions and intellectual rigor for what is and isn't "science" (again, good luck reinventing all that), welcome back to the pre-Socratic age of sophistry, demagoguery and a new Dark Ages. Just give it a century or so -- but hey, you won't be around, so why worry?


"These discoveries may be correct, and they may be true."

This is the problem with philosophy: you can take something correct and true and disregard it. Worse, it gets disregarded in favor of what is essentially an opinion.


Maybe they'd be more open to these ideas if you didn't refer to them as "followers of scientism".


> treat it as a learning experience rather than a deliverable product.

But what is the purpose of mentioning this?

If you are recommending that he go ahead with the endeavour, then what does it matter whether he ends up with a deliverable product or not?

Also, these kinds of projects can have a timeframe of years, regardless of programming skill. If he finishes it, it will be a result of his sticking with it in the long run, not of how much C knowledge he had on the first day of the project.



I have seen programmers make the same avoidable mistakes over and over again without learning much from them. Sure, they learn from syntactical errors and perhaps how to avoid specific kinds of errors. But how many programmers will actually re-model their entire coding paradigm in order to improve?


In order to learn from a "mistake" there needs to be some sort of feedback. Many programmers make the same mistake over and over because it hasn't yet come back to seriously bite them in the ass, or because the evidence of it isn't clear. Only then is the mistake corrected and the lesson learnt.

For example, in this day and age of fast processors, abundant memory and large hard drives, you can't expect a novice to put much emphasis on efficiency. He'll write his code, run his program, and see the output appear almost instantly. A veteran will look at it and shake her head, but that's because she knows that back in her day, code this inefficient guaranteed you'd spend the day in the lab just waiting to see the first letter of the output. Don't expect the newbie to learn that lesson any time soon, with his blazingly fast laptop, on which he's working comfortably from his bedroom. Now, hand him one of those snail computers at his next internship and watch the transformation happen.
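
A toy illustration of the kind of thing I mean (made-up function names; just a sketch):

  // Novice version: checks every pair -- O(n^2). Feels instant on a
  // thousand items and a fast laptop, so the lesson never lands.
  function hasDuplicatesSlow(items) {
    for (var i = 0; i < items.length; i++) {
      for (var j = i + 1; j < items.length; j++) {
        if (items[i] === items[j]) return true;
      }
    }
    return false;
  }

  // Same answer in O(n) with a lookup table. Object.create(null)
  // avoids clashes with built-in keys like "toString"; items are
  // coerced to string keys, which is fine for this sketch.
  function hasDuplicatesFast(items) {
    var seen = Object.create(null);
    for (var i = 0; i < items.length; i++) {
      if (seen[items[i]]) return true;
      seen[items[i]] = true;
    }
    return false;
  }

  // On 10 million items the slow one takes the better part of a day;
  // the fast one, seconds. On the novice's test data, both look instant.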


If it never comes back to bite you (or anyone else) in the ass, was it really a mistake?


Taking your question literally and in isolation, no.

But the manifestation more relevant to the real world is not having the experience base to realize it's biting you in the ass. Specifically, even though X is working, it could be working better, for less effort. Another common case is that X is not working and you misidentify why ("lazy developers not willing to do the work" instead of "used the wrong paradigm for the job", for instance).


True enough; premature optimization is often unnecessary.

The difference is that an expert will be able to make a good decision about that trade-off. A novice is restricted to the tools and practices they are familiar with - they might come to the same conclusion, but not for the same reasons.


Good advice, but I don't understand why writing a book is the "quickest way to get ridiculed". If a developer wrote a book that provides value, is accurate, and contains very useful code samples, I can't see how that wouldn't increase opportunities.

EDIT: Totally misunderstood that the OP meant it figuratively. Apologies.


So I get that you misunderstood the original intent. But that's not gonna stop me from a small rant on writing a tech book. In short: don't do it unless you really know what you're getting into. Writing an average/mediocre book is a massive undertaking, and if you want to write a quality book, take that, multiply it by 2, and add 20%.


I think he means the expression "Don't write a book" - meaning keep it short and to the point.


Yeah, that's exactly what I meant. If you wrote an actual book on programming, then that's pretty awesome. Writing a book for a résumé, however, is not.

