
The point about Credibility is interesting. There's something about someone who's written a technical book that automatically seems impressive just because writing a book is hard, and it's easy to assume that the person knows a lot. A long time ago I used to do technical review for a publisher on books about code. It was pretty obvious that some authors didn't really know what they were talking about. They knew enough to write something, but there were always a lot of mistakes.

I think many technical books are written to act as a 'proof' that the author is credible and knows their topic well, rather than as an exercise in serving the reader or an attempt to make money. Tech authors know they're not going to make much. It's more an exercise in vanity and improving job prospects. The author doesn't really need to be right. Especially with "Beginner's guide to X" or "Learn X in 24 hours" books: experienced and knowledgeable developers won't be reading the book to criticise it, and new developers who buy it won't know it's poorly written, so an author can write any old junk and still claim to be an expert. Consequently I've stopped being particularly impressed by people who have authored books on their resume.




> It was pretty obvious that some authors didn't really know what they were talking about. They knew enough to write something, but there were always a lot of mistakes.

This is why I'd be very hesitant to self-publish a technical book, even though for fiction I think self-publishing is the right decision 99% of the time. We all make errors, even those of us who do know what we're talking about, when you get to the scale of 100+ kilowords. For a novel, a decent copyeditor can fix up the production values well enough; for a technical work, you would hope the publisher assigned some people to check the work.


> for a technical work, you would hope the publisher assigned some people to check the work

That's what technical reviewers do. I did it back in the early 2000s. You get sent a copy of a few chapters, and it's your job to review the code and explanations to find errors. It doesn't pay very well though - you get your name in the front, and a free copy of the book, but nothing much else.


I did technical reviewing for a while. What frustrated me most was when I would say, "this isn't right, it's more nuanced than that" and they would come back and say, "this is a book for beginners so we'll just leave it the way it is".

So now my name is attached to incorrect information, and occasionally someone will message me and tell me why I was wrong and I have to message them back and say, "well I told them to fix it but they ignored me".


I had this experience, too, after tech reviewing quite a few books. I quickly got to the point where I didn't want to even look at the final product, because of all the pointed-out flaws that would still remain, and I'd know the book could have been so much better, except that some editor was in a rush, or lazy.


> That's what technical reviewers do

Well... in theory, that's what they do. I published a book a while back through a publisher, and they assigned me a copy editor and a technical editor. The copy editor was amazing - it was clear she didn't understand any of the technical details, but she spotted flow errors and minor grammatical mistakes in dense technical prose. The technical editor, on the other hand, seemed to have (maybe) skimmed the content; his only feedback was that he didn't like my writing style very much, and verifying all of the technical content was left up to me. I did take it seriously, though, and I am proud to say that very few technical errors have been reported.


I like the idea that all the code in the book goes through CI, so any edits get compiled and unit tested. Certain words and phrases could even be given "type annotations" to check consistent use of language.

Not to replace human review, but it should catch a lot of mistakes. Something like the sketch below.
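
A minimal sketch of the idea, assuming a Markdown manuscript with fenced Python examples; the file name, the extraction regex, and the extract_blocks helper are illustrative assumptions, not any real book toolchain:

    import re
    import subprocess
    import sys
    import tempfile

    def extract_blocks(text):
        # Pull every fenced ```python block out of the manuscript.
        # (Hypothetical helper; real manuscript formats vary.)
        return re.findall(r"```python\n(.*?)```", text, re.DOTALL)

    def check(path):
        failures = 0
        with open(path) as manuscript:
            blocks = extract_blocks(manuscript.read())
        for i, block in enumerate(blocks):
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(block)
            # Run each snippet; in CI, any non-zero exit fails the build.
            # Snippets containing unit tests could be run under pytest instead.
            result = subprocess.run([sys.executable, f.name])
            if result.returncode != 0:
                print(f"code block {i} in {path} failed")
                failures += 1
        return failures

    if __name__ == "__main__":
        sys.exit(1 if check("chapter1.md") else 0)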


>you would hope the publisher assigned some people to check the work

Depends upon the publisher too, whether they are interested in publishing quality books or have a quantity policy.

>We all make errors, even those of us who do know what we're talking about

I was hesitant to write a book for a long time because I thought I wasn't good enough. I still think I have a long way to go, but I've grown as a writer with experience. Feedback from readers has caught many issues and helped me improve the content.


I once was contacted by a technical book publisher, known for producing a lot of so-so books on specific topics, to be a reviewer for a book on a then-hot technology. The book was technically OK, but it was put together in such a disorganized and sloppy way that it was a hard slog to get through. As a reference text it would have been fine, had it been organized as such. This was before Stack Overflow, but you could make the same book today by scraping all the Stack Overflow questions and answers on a topic, throwing them together, and calling it a day.

And it was out of date very quickly.


>And it was out of date very quickly.

This is a problem with "then-hot" technologies. A number of years back, I was approached by a technical publisher to do a book on OpenStack. I wasn't the right person anyway and passed. But even if I had been, by the time the book realistically got to market, say 12-18 months later, it would have been three versions behind the current project.


I have published three books (with a publisher), and I can confirm that Credibility was one of the bigger motivations both for writing them, and for going with an established publisher.

For some of them, I started self-publishing with leanpub and was later shepherded into the publisher, and I got the impression I could make at least the same amount of money on leanpub.


You just need to put a camel on the front of your next book and it'll sell millions.


I have reviewed so many technical books that were simply bad copies of the documentation - and that by authors who had written 10 books.

And the (rather big) publisher didn't seem to care at all.


In technical book publishing, the publisher is a good signal as to the quality of the contents inside. I have never seen a worthwhile book from APress.

10 books from an author of technical books is a negative signal. There's not a lot of money in writing technical books so there's an incentive to pump out books without concern for quality. I remember trying to learn C++ in the 90s and nearly every book I read was complete garbage (the authors seemed to think that C++ was C with different comment syntax and using cout << in place of printf). It wasn't until I read the first STL book that things finally clicked.


"Numerical Python" by Johansson was an exception for me.


Interviewed an author of a programming book. We were very excited about it.

He knew crap about programming. Was an excellent writer.


> Now if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted, because it's going to get some junior reviewer who doesn't understand it. Or it’s going to get a senior reviewer who's trying to review too many papers and doesn't understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that's really bad.

-- Geoffrey Hinton

I've found that Hinton's experience with publishing holds doubly true for technical interviews, and I'm always surprised how often people in tech refuse to question their own interview process and instead assume that anyone who doesn't pass it must be an idiot, regardless of any prior evidence about the candidate.

While it is very possible that someone who is a great writer on technical topics is not a great match for your team, I really don't believe that this person "knew crap" about programming. It is virtually impossible to write well about a subject you don't understand.

Again, it wouldn't surprise me at all if an expert on a topic were not a good fit for your role; take Scott Meyers as an example. He has frequently admitted that he has little software engineering experience, and is not a software engineer. You should probably not hire Scott Meyers as a software dev. But if your conclusion after interviewing him was that he "knew crap about C++", I would read that as an implicit critique of your interview process, not of Scott Meyers.

Based on my experience interviewing, the vast majority of data scientist interviewers would quickly write off Hinton as someone who "knows crap" about data science, because they very likely would not understand the answers Hinton was giving.

Unfortunately, in tech hiring these days, true expertise is far more often than not a liability.


I get what you're saying, and I've been given bad interview tests before myself.

My baseline test is to swap keys and values in a [hash/map/dictionary], in any language of their choice. So [a=>1, b=>2, c=>2] becomes [1=>a, 2=>[b,c]].

75% fail completely. Some struggle but pass.

Others complete it in a minute and are confused as to why the test is so easy.
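
For reference, a minimal sketch of a passing answer in Python; the function name and the choice to group colliding keys into a list are my assumptions about the exercise, not part of the original test:

    def invert(d):
        # Swap keys and values; keys whose values collide are grouped
        # into a list, e.g. {'a': 1, 'b': 2, 'c': 2} -> {1: 'a', 2: ['b', 'c']}.
        out = {}
        for k, v in d.items():
            if v in out:
                if isinstance(out[v], list):
                    out[v].append(k)
                else:
                    # Promote a single entry to a list on the first collision.
                    out[v] = [out[v], k]
            else:
                out[v] = k
        return out

    print(invert({'a': 1, 'b': 2, 'c': 2}))  # {1: 'a', 2: ['b', 'c']}

(Values must be hashable to become keys; a production version might always return lists so the result has a uniform shape.)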


Why would anyone consider an entry-level title as a sign of credibility at this point? I remember picking up such books a quarter century ago where it was abundantly clear the author either had very little knowledge of the subject or the publisher was simply trying to profit off the latest fad. That isn't to say that a book cannot be used as proof of knowledge, communication skills, etc. In some ways it may be better than most forms of proof, since it can actually be verified. On the other hand, it's useless unless the quality is actually assessed.



