asfdsfggtfd's comments

What does them being male have to do with anything?

Are you implying that being male means that they must be reckless? Or that if they were female they would automatically be careful and gentle? Careful throwing around those negative stereotypes.

EDIT: The driver is likely to be getting shot at. This is going to be an order of magnitude more important than slight gender differences.


https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...

"Motor vehicle crash fatalities were higher for males than females in all age groups, while the male population is equal to or less than the female population in all age groups."

Particularly in the 21-30 age group, males had 3x the fatalities of females.


That has little bearing on the cost of fielding/maintaining a vehicle. Fatalities are so rare they won't have an impact on maintenance costs unless a sizeable chunk of your fleet is getting driven off a cliff.

In my experience men and women tend to be about equally hard on things. The male outliers tend to create more maintenance work by being hard on chassis/suspension ("hold my beer and watch this"). The female outliers tend to create more maintenance work by keeping quiet about problems (usually having to do with fluids not in their proper places) for far too long.

The most expensive driver is the kind that blindly follows directions/orders into a dumb situation.

Source: Did fleet maintenance in high-school.

edit: The words "in my experience" and "outliers" were used for a reason. I'm not claiming that all men will get a mini-bus airborne if given the opportunity or that all women will ignore an obvious puddle in a parking spot. I am stating what patterns I observed in the noteworthy cases of neglect. There are a million uncontrolled variables; maybe we were just a really scary maintenance department and none of the women wanted to talk to us or something. I'm not claiming that a bunch of high school teachers a decade ago is a sample that accurately represents the rest of the population. Maybe the way buses were assigned to teams (pseudorandom) resulted in the observed failure pattern.


I looked for statistics on maintenance cost of equipment for male vs female operators but didn't find any, just the undocumented assertions in https://www.equipmentworld.com/men-vs-women-who-are-the-bett... that basically say women are easier on equipment. Specifically:

  Attention to detail

  “I’ve always been impressed by how women take care of their
  machines,” Smith says. “They keep them clean and don’t leave
  trash in the cabs. If there was a drop of oil coming out of a
  wheel or something small like that they let you know about it.”

I'd be happy to look at any statistics you can provide that show "equally hard on things".


>I looked for statistics on maintenance cost of equipment for male vs female operators but didn't find any, just the un-documented assertions

Ok, well my undocumented assertion is roughly the opposite. Where does that leave us?

>I'd be happy to look at any statistics you can provide that show "equally hard on things".

Post on /r/mechanicadvice, dump out the 80% that were written by someone who's never actually turned a wrench and sift through what's left?

Regarding trash specifically I think the difference between a company vehicle and a personal vehicle is going to make a bigger difference than gender.


> Where does that leave us?

I guess it leaves us at, "Statistically, three times as many young adult male drivers have fatal accidents as equivalent young female drivers, despite the male population being smaller."


> What does them being male have to do with anything?

Being young and male correlates strongly with high testosterone, the effects of which you can Google for. Being young and male also correlates with higher incidences of road traffic accidents than most other demographics.


But there are millions of young males in the US alone who are more careful than millions of young females.

The effects of testosterone are far from as simple as you suggest. It tends to lead to higher competitiveness. This is not quite the same as recklessness.

Meanwhile during normal usage the driver (of whatever gender) is likely to be getting shot at. This will tend to have a larger effect than their gender.


> there are millions of young males in the US alone who are more careful than millions of young females

Yes, and millions of young men in the US who are shorter than millions of young women. What’s your point? Young men are still disproportionately more likely to be in road traffic accidents.

Re testosterone, the keywords you should be searching for are “risk taking behaviour”.

I’m curious also what percentage of their time you think the average military driver spends being shot at.


Looking at a few studies, the data is much more complex. For example, higher levels of circulating testosterone were associated with lower risk aversion among women (r = −0.1793; P = 0.01), but not among men (P = 0.11). However at comparably low concentrations of testosterone the gender difference in risk aversion disappeared, suggesting that testosterone has nonlinear effects on risk aversion regardless of gender.

Or to summarize the study's summary: there is an association with risk aversion when going from low levels of testosterone to very low levels, but the claim that males with high testosterone show increased risk-taking behavior versus males with normal levels is not supported by that study.

(https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2741240/)


> Young men are still disproportionately more likely to be in road traffic accidents.

And in the military, given how e.g. the draft has never applied to women. Times are changing, but very, very slowly, and there will always be a gender disparity due to physical differences (e.g. strength, see also: almost every sport)


Being shot at is the most important time. This is when the wheel is going to be put under the most stress and it is most important that it performs...


> Meanwhile during normal usage the driver (of whatever gender) is likely to be getting shot at. This will tend to have a larger effect than their gender.

No -- being shot at is 0-2% of your time deployed. The rest is waiting around.


Being shot at is the most important time. This is when the wheel is going to be put under the most stress...


Weeeeell, we do have statistics for how civilians drive based on gender.


Note: the driver is unlikely to get shot at. Combat is actually rare.


Being shot at is the most important time. This is when the wheel is going to be put under the most stress and it is most important that it performs...


You can set it up so that the user doesn't have to type sudo docker. But they still effectively have root access via docker.

I guess it gives some social pressure not to do superuser things?


It looks like a package manager choice. Dpkg/Apt isn't doing by default what they actually need, so they use Docker instead.


Is it a convenient excuse? Climate change would imply that it is not the salmon industry's fault that salmon populations are changing. The industry person suggested overfishing patterns are the cause. This puts the blame firmly on the fishing industry...


I should rephrase - I suspect that the study was funded by the industry, whereas the industry insider wasn't speaking on official terms.


5% of males are red-green colorblind. Never mix red and green. If you do, 2.5% of your audience has just stopped using your product/looking at your presentation.


Red-green was in the context of mixing colors, not color-blindness here. If you e.g. mix oil/tempera colors, red and green give you a really ugly grey color; on the edges between red and green patches you'd get something similar.

I have the opposite issue - my color perception is 100%, so I can distinguish very slight differences in color tones. What might look acceptable to you might be unacceptable to me and vice versa ;-)


Interestingly some of those who are colorblind actually see color contrasts that other people don't see. Color vision is an interesting subject.


As a red-green colourblind male, I'd temper that statement to: don't mix red and green for encoding information. Ideally, don't use colour as the sole means of encoding information anywhere.

The experience of colourblind people can vary wildly, but for me other colours are harder to discern than red vs green. Blues, purples and pinks are all very tricky.

If however you want to use red and green for some styling element, knock yourself out!


You are in many ways correct. Except that styling often crosses over into encoding information without the styler realizing it.


Exactly. I'm also slightly red-green colorblind, but I only really have difficulty with green and yellow when I need to differentiate them quickly.


Sorta, if the red-green is purely design and doesn't serve a functional purpose it would be fine.

Ideally if you design some UI, do it in Black and White first, if it remains usable and somewhat pretty, add colors.


> purely design and doesn't serve a functional purpose it would be fine

Or if it does provide functional information (traffic-light style status displays for instance), it is fine if the information is also carried by other means.

> do it in Black and White first ... then add colors.

Exactly. That way anyone with reasonable vision[1] can use the result and it is enhanced for those with good colour vision, rather than requiring good colour vision to be properly usable.

[1] Other serious visual impairments (up to and including complete blindness) should be considered too where possible, of course. Keep contrast high, try to design with screen readers and other common aids in mind, ...
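
That kind of redundant encoding can be sketched minimally, in Python for illustration (the `STATUS_STYLES` table and `render_status` helper are made-up names, not from any real library): the colour is decorative styling, while the symbol and the status word carry the information on their own.

```python
# Hypothetical traffic-light status display with redundant encoding:
# colour for styling, plus a symbol and the status word for meaning.
STATUS_STYLES = {
    "ok":      ("green",  "✓"),
    "warning": ("orange", "!"),
    "error":   ("red",    "✗"),
}

def render_status(status: str) -> str:
    colour, symbol = STATUS_STYLES[status]
    # Even with the colour stripped, "✓ ok" vs "✗ error" stays readable.
    return f'<span style="color:{colour}">{symbol} {status}</span>'

print(render_status("error"))
```

A user who cannot distinguish the red from the green still gets the symbol and the word; the colour only enhances the display for those who can see it.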


For screen readers, I've found it helpful to use elinks or lynx to develop the initial HTML.

If your application remains readable and functional on a terminal output, chances are high a screen reader can handle it (since I can't afford any of the screen readers people seem to use, that's what I'm stuck with, aside from following general guidelines on this)

Progressive enhancement, the process of starting with a working HTML page and then enhancing it with CSS/images and finally with JS, is sadly a rather lost art among modern webdevs.


I reflashed a router five times and the LED was red indicating an error. I was desperate.

Then my brother walked by and told me that in fact it was green....

Some LEDs are a nightmare, and I work in IT


That's not a binary outcome. Note: deficiency is not blindness.


It is pretty binary. If I can't understand your slides because you have one set of points in red and the other in green I walk away and ignore your presentation.

Colorblindness also has very well established definitions. Note: learning what words mean should come before trying to redefine them.


Those definitions are mostly wrong. On HN (my experience) people associate colorblindness with and only with "red-green color blindness". This is obviously wrong, or at least not well defined.

It's not binary. There is a range from not colorblind, through minor deficiency, to fully red-green colorblind.


As a single batch job that won't be repeated this doesn't sound like a good candidate for ML. ML is more suited to on-going processes.

Why would you use server instances in Azure to do ML? Something like Google CloudML (I'm sure that the other major cloud providers do managed Tensorflow as well I've just never tried it on their platforms) would be a better fit to a project with only two technical staff. Your two staff probably spent a combined total of one-person-month working on infrastructure.

Your issue with small data is very real. People need to stop trying to do ML on small datasets. The results will be sub-optimal.


When you say ML, you must mean deep learning applied to unstructured data (vision, audio).

ML in general can absolutely be used with small datasets. ML is all about finding the right model complexity to fit to the data to maximize out-of-sample performance. If your dataset is small, all that means is that your model will have to be more crude. A simple cross-validated regularized linear regression or a shallow decision tree are ML models too, and you can usefully apply them to a dataset of just 100 samples.
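
As a sketch of that point (assuming scikit-learn and a synthetic 100-sample dataset, so the numbers are illustrative only), a cross-validated regularized linear regression on small data still gives an honest out-of-sample estimate:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# A small synthetic dataset: 100 samples, 5 features, linear signal + noise.
X = rng.normal(size=(100, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=100)

# RidgeCV chooses the regularization strength by cross-validation,
# letting the (small) dataset dictate the effective model complexity.
model = RidgeCV(alphas=np.logspace(-3, 3, 13))

# 5-fold cross-validation estimates out-of-sample R^2.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV R^2 on 100 samples: {scores.mean():.2f}")
```

Nothing deep-learning-shaped here, and no big data; the held-out folds are what keep the performance estimate honest.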


Or, instead of hiring an ML employee and building a model for 100 samples, I might as well apply my own business intuition.


It depends on the application. You might want to use the data to verify your intuition, which may not be consistent with the data. Or your intuition may have been based on the very same dataset but overfit to it.

I do think that people that think that ML = big data are mistaken. ML is about making the most of your data, however much of it you have.


Then you're mistaking a trendy term "ML" for "hiring an analyst".


ML is ill-defined. But take most textbooks on ML, and you will find regularized linear regression, decision trees, cross-validation and bootstrapping all in there.

In my view, the main difference between ML and plain statistics is that with the latter, you come up with the appropriate model a priori, and then make sure the data satisfies the assumptions so that you can draw conclusions from the in-sample fit of the model. You control for overfitting by choosing the simplest model that is reasonable - often univariate linear regression.

Whereas with ML, you let the data dictate how complex a model you should use. You choose the appropriate model complexity using techniques like cross-validation, and verify the effectiveness of your model empirically.

ML is often used interchangeably with ANNs which I think is a mistake. Take structured data problems on Kaggle and you would very rarely see ANNs as a major predictor in the winning models.


If the number of dimensions is high, building a model by hand could be extremely difficult, and using ML makes sense.


If you have 100 examples and a high number of dimensions you will end up with a very over-fitted model.


Not necessarily. Cross-validation can give you a valid estimate of out-of-sample performance even if you have more dimensions than samples, and even if some of the features you have are (sporadically) perfectly correlated with the target variable. See https://stats.stackexchange.com/questions/295626/does-cross-...
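
As a rough illustration of that (again assuming scikit-learn and synthetic data), with four times more features than samples a regularized model scored by cross-validation still produces a sensible held-out estimate:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# More dimensions than samples: 50 samples, 200 features,
# only two of which actually drive the target.
X = rng.normal(size=(50, 200))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=50)

# The L1 penalty keeps the fit in check despite p >> n, and
# cross-validation scores the model only on held-out folds,
# so the performance estimate stays honest.
scores = cross_val_score(Lasso(alpha=0.1), X, y, cv=5)
print(f"mean CV R^2 with 200 features, 50 samples: {scores.mean():.2f}")
```

An unregularized least-squares fit on the same data would interpolate the training folds perfectly and fall apart on the held-out ones, which is exactly the over-fitting the parent comment worries about.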


The terms you're using come from machine learning, which comes from the same branch of mathematics as ordinary statistics. And that's how ML is taught at universities anyway.

So if you've got an analyst sitting on that issue - then your problem is solved anyway.

So again, why hire somebody with a trendy specialty who is <probably> full of BS, when you can hire someone with some respect for classics?


I never said anything about who to hire.


If your model doesn't beat a reasonable benchmark created using some business intuition then this is a bad idea.


But why would I be running anything other than Linux...


I know nothing about this... However, I can see a little bit of sense to this (not much - and nothing that couldn't be solved a lot better). If you have access to the source code it is very easy to start adding patches. On the other hand if you have access to binaries it is also very easy to start modifying them...


Their argument seems to suggest that I should roll my own openssl, for "risk management".

I mean, these projects typically have thorough tests.


That’s not a very accurate analogy. The situation would be more like “you can only use a pre-compiled, audited version of OpenSSL”, which makes much more sense. Of course you could audit source code as well, but it’s much harder to verify than a binary.


wait, wait, it's harder to audit source code than a binary? wtf?


I should probably clarify: it’s easier to make sure that a binary is the same as a version that has been audited, whereas with source code, there’s much more uncertainty in the build process.


With Yocto, you can audit the recipes (example [0]) being used to build the library.

Yocto generates checksums for all inputs and outputs and caches intermediate build steps, ensuring consistency in the build process.

Also, Yocto will not compile anything with your local/native gcc toolchain. It will, however, use your local gcc toolchain to build its own gcc toolchain, which it will then use to build all the relevant recipes. This again ensures build consistency across platforms/machines.

If the argument is purely a "well, I can't audit the produced binaries", I'd argue that you can audit the build system in great detail.

[0] https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/re...


How about the following?

"Sure, we do use the precompiled binaries. We can even prove it because they have the same hash, and our procedure fails if they don't"

How you obtained your binary is immaterial, if you can prove it's the same as the audited one. Surely regulatory people would accept that?
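
That check could be sketched in Python (the file path and the audited digest in the usage comment are hypothetical placeholders; only the standard-library hashlib is assumed):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_matches_audited(path: str, audited_digest: str) -> None:
    """Fail loudly if our build does not match the audited binary."""
    actual = sha256_of(path)
    if actual != audited_digest:
        raise RuntimeError(
            f"{path}: digest {actual} != audited {audited_digest}"
        )

# Hypothetical usage in the release procedure:
# verify_matches_audited("build/libcrypto.so", PUBLISHED_AUDIT_DIGEST)
```

Making the procedure fail hard on a mismatch is the point: the claim "our binary is the audited binary" becomes mechanically checkable rather than a matter of trust in the build pipeline.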


Surely it's more that you should obtain certified binaries of openssl from a third party?


Yeah, I get it.

However, if I use SOUP, I'm responsible for bugs. How would I debug with no source?


> How would I debug with no source?

I don't work in the field, but just because you're using a certified binary doesn't mean no source can ever be associated with it for debugging.

Continuing the example of a certified binary of openssl from a third party (say version 1.1.0h), I'd expect you to be able to debug the certified binary using a version-matched source tarball from the openssl website [1].

[1] https://www.openssl.org/source/openssl-1.1.0h.tar.gz


> For me update to ubuntu 17.10 broke many things

> The worst part is those are the machines I use for work ;(

Why are you using experimental OSs on work machines? The LTS versions exist for a reason.


And if your data is really big then so long as it is structured something like BigQuery lets you carry on using standard SQL queries...


The trick is to do both. Then you have addressed nearly 5% (rounding up a bit) of the US's fossil fuel dependence...

