brianmwaters_hn's comments

The problem-solving aspect of climbing (especially bouldering) is definitely one of the best parts. It's very easy to get sucked into it once you get started.


Totally anecdotal here, but I would not hesitate to wager that many climbers' grip strengths lie way outside the normal range. The sport has a higher ratio of grip exercise to cardiovascular exercise (however you might measure that) than probably any other sport that exists today.


Maybe for 'gym rat' climbers. Gyms have larger holds and steeper walls. Climbing in the real world is more about balance and legs/feet. You almost never hang off your hands. Instead you are gripping tiny flakes of rock in an effort to keep weight over your feet. Hands and fingers will get abused in cracks (something gyms don't have), but I'm not so sure about grip.


Not trying to sound arrogant, but this is simply false. Source: me. I climb 5.13/V10.


And I lead 5.11c walls. It's a big sport.


Yeah, I think people are also confusing grip endurance with grip strength. Pulling a 500+ lb bar off the ground requires more grip strength than supporting your body weight on a climbing wall.


Also, different parts of the finger are involved. Outdoor climbers use the fingertips, indoor climbers the first two knuckles. Weightlifters use the whole hand, which is probably best for measured grip.


Another way of building your grip strength is deadlifting raw, although that is anaerobic.


raw ... ?


Without a deadlift suit[1]. Some people also use "raw" to mean without straps[2].

[1] https://gometal.com/product/pro-king-deadlift-suit/

[2] https://www.paulcheksblog.com/wp-content/uploads/2014/03/Wri...


Equipped DL puts more strain on the grip simply because the weight is heavier.

The best DL-like movement for specifically targeting grip strength is partial DLs off blocks or in the rack. That way the weight is heavier and you can specifically target the lockout hold, where grip strength matters most DL-wise.


Only ever heard it for straps.


"Zero-day protection" is marketing-speak for what security engineers call "exploit mitigations." Of course they don't prevent exploits; they mitigate them. Pretty typical that the marketing term is an exaggeration of the more accurate engineering one.


Exploitation can certainly be outright prevented. For example, automatic integer overflow checking reduces any integer overflow vulnerability to at most a denial-of-service attack (a clean abort). _FORTIFY_SOURCE (including the more dynamic implementation in CopperheadOS) does the same thing for a large subset of buffer overflows, as does -fsanitize=bounds, which is globally enabled.
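A minimal sketch of the first of those mitigations, assuming clang or a recent GCC (the flag spellings here are UBSan's; CopperheadOS's toolchain configuration may differ):

  $ cat demo.c
  #include <limits.h>
  int main(void) { int x = INT_MAX; return x + 1; }  /* signed overflow */
  $ cc -fsanitize=signed-integer-overflow -fno-sanitize-recover=all demo.c
  $ ./a.out    # prints a runtime error and aborts instead of wrapping

The bug is still there; the checking just caps its worst case at a clean abort rather than memory corruption.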


Totally agree with everything you're saying, but I would point out that the article is not addressing a lay audience; it's addressing network engineers who are supposedly lacking some historical/technical knowledge of their own field of work.


I expect network engineers to understand what they do day to day. Do they really need to understand the whole history of the field to do that? Mostly not: just the specific things necessary for their jobs. Our economy and resources are sort of rigged in that direction. So, we might need to use a preemptive solution where we present various principles, tactics, pros, and cons to them rather than a full history lesson they'll think is useless.

Ideally, they'll learn it all. I've learned the world is rarely ideal. Need a practical alternative to ideal...


That's just it: those who do not understand history are doomed to repeat it. By not learning about it, they will make poor day-to-day decisions which lead to the exact same mistakes decades later.


It's true. Most of history is probably useless, though. So, you need to know which parts to learn. Most people getting into network engineering don't have 5-10+ years to figure that out on top of work, family, fun, etc. So, the most relevant parts of the history of a sub-field should be accessible in a way that doesn't require that level of effort. Otherwise, it's like making people studying mathematics, mechanical engineering, etc. look through every ancient manuscript and writing to unearth the best principles and techniques for their field, day-to-day and project-to-project. Makes no sense.

And this is coming from a guy who's gone to the extreme of studying everything in several sub-fields of IT: over 12,000 academic or professional papers in my archive on top of the books, etc. Skimmed most, fully read some, and re-read a tiny few (gold mine). Took a long time to get to that tiny few. No justification for that except that that generation made little to no attempt to get that stuff to me in an accessible way. And now my generation is repeating the mistake for the next.

Or do you really think it's optimal that every learner have to spend 3-5 years per sub-topic digging through the whole of knowledge on it just to find the few things that might pay off?


It's not about spending 3-5 years learning about the history but actually believing and understanding that that history has value. It's the ability to regularly ask the question "is this a solved problem" and actually be willing to work to figure it out. This is what I feel is missing with a lot of the younger engineers I interact with.


"It's the ability to regularly ask the question "is this a solved problem" and actually be willing to work to figure it out. This is what I feel is missing with a lot of the younger engineers I interact with."

I agree with you on that. Many don't have that desirable trait. They'll miss out on greater things because of that. Another thing worth study and effort is figuring out how to teach that mindset to young engineers.

Once they have it, then they need the organized presentation of our fields that I'm arguing for. Both the will/mindset and resources for learning are necessary for best results.


Do you keep track of your gold mine in a list somewhere? Would be interested in looking at it. :)


Before I can share it wholesale, I have to go through it to figure out which ones are locked into ACM/IEEE and exclude them from the list. The rest are [mostly] disorganized because I started pulling them in batches of 20-100 a day. So, there's a little work to be done there before I can publish them.

I've saved your username and email somewhere so I can notify you when I get around to doing that. What topics are your core interests? My main focus is high-security IT and software/system verification, with a number of papers on programming, hardware design/synthesis, software engineering, networking, databases, filesystems, OSes, and high-performance computing. Might send you a few samples in each of your interest areas to let you see the kinds of things people overlook. I should have time this week.


As far as computer disciplines go, my core interests are in language design, distributed systems design, software engineering, security, string and graph algorithms, high performance computing, and OS and filesystem design. Aside from that I have interests in genomics, nanoscale optics, and AI/ML.

Thanks for the offer!


I sent you an email (at the address in your HN profile description) on the subject. I hope you get back to me.


@ brianmwaters_hn

"took so many years and thousands of papers... due to the sheer volume of information. That time wasn't wasted sifting through irrelevant stuff - you're now an expert. On the contrary, I think the writers and organizers of all that material did a great job; both of us owe nearly everything we know to them."

I agree there's a lot of material, the learning process was worth it, and there's even more to gain. That said, the learning process taught me that a tiny, tiny fraction of those papers taught people 95+% of what they need to know for their sub-field. We need that information packaged, well-presented, and widely distributed for each sub-field. People wanting to learn more can volunteer time to do so. All I'm pushing for is that baseline packaged and ready for newcomers with few to no obstacles. Right now, it's hard enough to find that most don't. Gotta change that.


Nick, (this is a reply to your post at the same nesting depth, as we're at the max here)

We're pretty much on the same page on this whole thread, but consider the idea that perhaps the fact that it took so many years and thousands of papers wasn't because the previous generation didn't organize them well, but was simply due to the sheer volume of information. That time wasn't wasted sifting through irrelevant stuff - you're now an expert.

On the contrary, I think the writers and organizers of all that material did a great job; both of us owe nearly everything we know to them.



>Do they really need to understand the whole history of the field to do that?

That marks the difference between being a technician and being an engineer. A technician just needs to know how to do their job, whereas an engineer needs to know why they need the technician to do their job.


I still put the same question to your answer. Most engineers don't need to do near as much digging as many IT people do to understand the important stuff behind their field. For instance, despite all that's written on HW design, my research found that most of the key stuff (including insightful wisdom) can be found in under 10 books that properly transferred both knowledge and wisdom with minimal extraneous information. From there, it's years of experimentation to make those things habit and better understand why they're true. I discovered them with some Google perseverance, but also because pros were recommending them while explaining what wisdom each contained.

That could be done way more easily in networking. It should be done if it hasn't been already. And if it has, the key resources shouldn't be so obscure: people like that blogger should be readily linking to them, as I've seen done in engineering fields. Don't make people wade through endless papers and the histories of bureaucracies' decision-making processes to extract one technical report's worth of important design guidelines and justifications. Give them the good stuff in a way that lets them know it's the good stuff.

"If you want your son to throw 50 yards, well... GIVE HIM... 50... YARDS!" (Vault drink commercial)

And, heck, it can even be presented in a time-ordered fashion where they learn some of the past as they go. They might start with a certain type of medium, the protocol for it, the issues, the solutions, and key things to learn from that which might apply to other situations. Then the next and the next. All the way up to samples of cutting edge stuff* at the end to make it memorable with an awe factor and exotic, weird stuff from our history sprinkled throughout just to hold attention (but also teach).

* A book on supercomputing got me more into networking than networking itself did. The ultra-low-latency, high-bandwidth, cross-bar switches blew away anything I used. Plus, they in theory let many cheap nodes become a machine with the CPUs, RAM, and graphics cards of an SGI Onyx2! To do it, though, I needed to understand the effect of wires/optics, host connectors, host protocols, cross-bar designs, topologies, cache-coherence algorithms, and so on. Quite a lot, but quite the motivation. Never got to Onyx2 level on a budget, but the lessons gave long-lasting capabilities: topology, Beowulf clustering, single system image, clustered filesystems, Active Messages, reliable UDP (e.g., UDT), and discovering NUMAscale's products after typing the concept (NUMA) and connector (HyperTransport) into Google. One or two badass technologies along with justifications went a long way, eh?

An example would be me studying ancient NSA computers and finding one that used mercury for RAM. The concept of a buffer overflow became more memorable as the story brought the consequences they endured to life: mercury exploding out of the computer. Crazy stuff we'll never encounter, but I still remember it, and it reinforced preventing overflows.


This is an attitude I see more and more of today, and I think it's unfortunate. The slides in question aren't "a pile of acronym data;" they're made of lingo that would be recognizable to anyone who works in the field of networking - even if they don't understand the specifics of the protocols in question.

At any rate, the author's complaint isn't that "kids these days" haven't heard of all these acronyms, it's that they haven't learned any of the technical details behind those acronyms, and that those details are still relevant.

Finally, I have a bone to pick with the "it's the older generations' responsibility to educate the younger generation" idea that I hear so often. At the end of the day, it's everyone's own responsibility to educate themselves, and we all know there are plenty of materials available for that.


"Finally, I have a bone to pick with the "it's the older generations' responsibility to educate the younger generation" idea that I hear so often. At the end of the day, it's everyone's own responsibility to educate themselves, and we all know there's plenty of materials available for that."

I'm on your side of the discussion overall but that's unrealistic. It took me a decade to get so much of this knowledge and wisdom out of papers on programming, OS design, networking, security, etc. I just found some more foundational work in past months that should've been in every classroom for its relevance but nobody's heard of it.

The problem is that the stuff is scattered all over the place and not in pure form at all. There are books, papers, brochures, lectures, etc. These might have good details, fluff, or a varying mixture of each. Many great works can only be found behind paywalls (IEEE, ACM). Others are on academic sites, specific blogs, or places like CiteSeerX, where you have to know what you're looking for ahead of time.

Our field is anything but a clean, integrated presentation of what really mattered, matters, and might matter. It's a huge, scattered mess that we expect the new crowd to just automagically sort through and discover the necessary stuff in. Prior generations certainly have some responsibility to make that easier rather than harder. I do my part with posts here and elsewhere directing people to specific techs that solved (or nearly solved) the problems they are talking about. We need a more thorough solution, though, for the various sub-fields of IT before I'll blame the newcomers for prior generations' mess.


Hm, I agree that there's a lot scattered out there, but I hope that there's some room for exploration (and maybe specialization) somewhere in the noise.

I actually have an interesting perspective on this, being completely self-taught, before returning to university as an adult to get a computer science degree.

There are pros and cons to both sides of the self-taught vs. teacher-taught thing, though I'll make my bias clear up front: I spent a lot of years reading books and messing around with stuff; now, when I hear college students complain that a teacher "isn't a clear lecturer" or "didn't answer my question well," my tendency is to say "there's a book and an Internet out there, suck it up and get to studying, buddy," though I realize that attitude isn't perfect for everyone.


" when I hear college students complain that a teacher "isn't a clear lecturer" or "didn't answer my question well," my tendency is to say "there's a book and an Internet out there, suck it up and get to studying, buddy,""

I get that and mostly agree with it. The exception being people who learn best with the help of others (esp. face-to-face). They don't learn the hard stuff well from text, but they're valuable once they learn it. The other exception would be topics where having a pro at hand can greatly simplify the learning process, mostly due to the nature of the topic itself.

In most situations, you're totally right. People just aren't putting in effort. I think the stuff with lots of historical baggage of unknown usefulness seems like unjustifiable effort to many in IT. So, I don't throw them into that category when it's the material itself, rather than their own skill set, that's the obstacle. Now, if they didn't know essential networking skills and griped that nobody told them, I might link to your comment followed by a backhand.


> when I hear college students complain that a teacher "isn't a clear lecturer" or "didn't answer my question well," my tendency is to say "there's a book and an Internet out there, suck it up and get to studying, buddy,"

Which raises the question of what the lecturer is even standing around talking for. Isn't it just a waste of their own and everyone else's time if they aren't good at doing what they attempt to do? If someone is a good researcher and lousy in the classroom, why make them teach classes and waste everyone's time?


I agree with you there. I think Brian probably would, too. His point was that many people let their pursuit of knowledge end there rather than use other resources available to learn what they needed to know. The lecturer becomes an excuse rather than an obstacle on the path to understanding.


There are an order of magnitude more failed technologies than there are successful ones and learning all of the successful ones is already a gargantuan task. Just because some crotchety old guy watched a particular one go up in flames doesn't mean that everyone he talks to about a concept needs to be subjected to an analogy only relevant to one failed subject.

You are misunderstanding me if you think I suggested that old people need to educate young people. I'm suggesting that if concepts cannot be distilled from the technology and discussed on their own merit, don't bring them up because it means you don't actually understand them well enough.

This guy is mad not because people couldn't understand concepts; he is mad that they didn't get his archaic analogies. It's like a car guy calling an engineer stupid for not knowing what engine was in a '52 Chevy.


So he's a "crotchety old guy" with an analogy relevant to just "one failed subject." You have absolutely no respect for your elders, do you?

Also, the second sentence of your second paragraph makes no sense to me.


>You have absolutely no respect for your elders, do you?

No, I have no respect for people that offer analogies via obtuse references and then look down on you for not getting it.


That is so cool. We're used to thinking of programming as a sort-of abstract job that takes a long time to do and gets deployed to everyone. It's really cool to hear about a programmer being able to make an on-the-spot operational impact on a real work site.

Also, I wonder how the hell the thing survived?


Okay, I used to actually do inspections on oil refineries and wind turbines using rope access techniques, so I think this is a good one for me to chime in on.

In rope access, we like to pride ourselves on being the technology that can get the job done faster, safer, and more flexibly than older methods (say, scaffolding or boom lifts). I kind of see drones as having the same advantages, but squared - kind of like LED light bulbs compared to CFLs, compared to incandescents.

However, this Bloomberg article doesn't mention any of the downsides of drone inspection. Last time I checked (and I've been out of the industry for about a year), there were concerns over the quality of the photos taken by drone teams, although there is room for the technology and the skills to get better there fast. The real caveat here, though, is the limited usefulness of visual inspection in an oil and gas environment. Most of the work done on-rope is UT (ultrasonic testing), which requires hands-on contact with the structure, and RT (radiographic testing), which involves lethal doses of radiation aimed in specific directions. Obviously, RT would be wildly dangerous from an aircraft, and will probably never happen regardless of technology.

So I see a place for drones today in visual inspection of equipment while it's online, but I can't imagine a near-term drone-based technology that would be able to carry out UT. So us rope guys will be in business for a while longer.


If it can fly with enough precision (say, keeping within 4" from the surface, or directly contacting the surface with an instrument; not sure what the requirement is) and the equipment can be made light enough that inspections can be completed without an egregious number of returns to base to recharge, I feel like even ultrasonic testing would be feasible.

As far as RT goes, it seems like it'd be a much better idea to do that with robots instead of humans who get cancer. I'm assuming that there aren't any other people within a range that would get a significant dose of radiation during this testing, but even if there were, it would seem to me that automated inspection could eventually be made more precise and safer than what humans can do.

Definitely an interesting time to be alive :)


For UT, the surface needs to be directly contacted with an instrument; but that's not all.

First, insulation must be cut off and disposed of (properly). It's usually tin flashing covering some kind of insulating material. If the insulation is asbestos, then it's not acceptable to have any of it blow off into the wind at all.

Next, the surface has to be polished, usually with a rasp or grinder, so that the probe has clean metal to make contact with.

Then, the probe, coated in ultrasonic-conducting jelly, gets applied to the surface, and must maintain sufficient contact for as long as it takes for the instrument to get a reading (at least a few seconds, and it's not always reliable). This step has been done in the past with robots, albeit in different access environments.

Finally, the insulation gets re-applied, covered in tin flashing, and sealed with caulk.

I'm not saying it's not possible in principle for a flying robot to carry out steps one, two, and four, but I can't foresee a technology able to do this within, say, the next ten years.

As for RT, you have to understand that a lot of refinery inspections are done in what's called turnarounds - where the refinery is turned completely off for a week or two at a time. Tons of contractors are called in to work overtime on top of one another in order to carry out planned maintenance and get the thing back online ASAP (time is money - big money). Obviously, extreme care is taken when doing RT in this kind of environment, because a mistake can be extremely dangerous.

It simply isn't safe, in any universe, with any technology, to put a radiographic source (actual radioactive elements) on a flying object and zip it around in the middle of a refinery turnaround. Period. Maybe during normal operations, but only at the cost of extreme interruption of everybody's work - eg, everybody literally has to exit the entire facility.

EDIT: on second thought, having everyone leave the refinery isn't even possible without turning the thing off. There are operators who need to be on-site at all times, monitoring and making adjustments to the various processes. And sometimes a valve really does need to be turned by hand ;)


> or directly contacting the surface with an instrument

This is more challenging than you might initially think, as conventional multirotors can't easily generate the forces you need to place a tool/sensor against a vertical structure.


I think a large market for drones is just repeated pipeline surveying with a long-range imager. As you say, I can't imagine drones altogether replacing human inspection on rigs and plants.


It's called an "autoblock" by climbers today in the US: https://en.wikipedia.org/wiki/Autoblock


That appears to be a slightly different knot. The final stage where the bottom loop goes through the top isn't done for an autoblock.


> The final stage where the bottom loop goes through the top isn't done for an autoblock.

That'd be the French knot, which only blocks in one direction. The Machard knot, which blocks in both directions, does not loop through itself; the karabiner goes through both loops to close the knot.


Yes. Among them, Vim, Bash, and OpenBSD, the last of which supposedly has a very good reputation for security.


I think this is kind of antithetical to the Unix philosophy.

If you add special exceptions for things like "rm -rf /" then you start to wonder, why not add exceptions for other dangerous things, like "find / -delete" and "rm -rf /usr".

In general, most of the basic Unix tools operate off of relatively simple first principles and don't contain exceptions for things like this.


When young I once had the opinion that such exceptions would be a good thing.

That is, until I installed some version of Red Hat that aliased rm to "rm -i", and extracted a few wrong tar files. Then I understood why the shell is the way it is, and why everybody just clicks "OK" on Windows dialog boxes without reading the alerts. The funny thing is that I lost some important files because I expected the prompt, but pressed "y" 19 times instead of 18...
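(If memory serves, the distribution shipped something like this in root's shell rc - a hedged reconstruction, not the exact file:)

  alias rm='rm -i'   # prompt before every removal
  alias cp='cp -i'
  alias mv='mv -i'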

Nowadays I just do backups.


First off, as someone already pointed out, GNU rm does fail on `rm -rf /`. Secondly, that one is way more important to protect against than `find / -delete` or `rm -rf /usr`, because it's just way easier to mess up and have a stray slash in your command line.
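For reference, a sketch of what that failure looks like with a current GNU coreutils rm (the exact message wording varies by version):

  $ rm -rf /
  rm: it is dangerous to operate recursively on '/'
  rm: use --no-preserve-root to override this failsafe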

Case in point: a Unix-novice coworker of mine came up to me once and said, "I think I might have done something wrong. I'm trying to remove an empty directory and it's hanging." It turned out he had created a directory in whatever Linux desktop GUI he was running at the time and accidentally added a space at the end. He didn't notice until he looked at it with `ls` in a terminal and saw that it printed like this (with `ls -F`):

  somedir /
So he decided to rm it and start over. Can you guess what he typed?
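(Presumably something like the following - the unquoted trailing space makes the shell see two operands, the second being the filesystem root:)

  $ rm -rf somedir /     # two arguments: "somedir" and "/"
  $ rm -rf 'somedir '    # quoting keeps the trailing space in the name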


For the record, GNU's Not Unix.

