Having taken a few (graduate) courses in development economics, I've decided that development is not for me, and my skills simply don't apply. I think the same applies to most people in tech. As the article points out, there are simple low-tech things that would make a big difference, and while actually scaling these is hard, it's probably the most cost-effective way to get improvements. There may be some technology involved (e.g. mosquito nets, or the Green Revolution) but it won't be something that a software engineer would be good at building.
I think that to maximize the benefit one has on the world, it's necessary to step back from frantic guilt and consider the underlying economics. Development takes expertise and money. People in tech can help indirectly by growing the economy, thus providing more money to do these things (and also by donating the money they earn directly). In my rough estimation, anything in tech has large positive externalities, while finance is slightly negative (mostly because a lot of profits in finance are siphoned off people who are irrationally willing to pay for active management of their investments).
Reading this article, I'm reminded of the movement to "design for the other 90%." It is ironic, and also a little sad, that our high tech society thinks that all the developing world needs is more technology. I guess it is easier, and more enjoyable, to design and build things that seem useful, rather than go out and try and change people's behavior. (Not that I think that the movement itself is bad, just that any project in that domain needs to consider how much its product will really be used/adopted.)
> I guess it is easier, and more enjoyable, to design and build things that seem useful,
I agree with this. We (IT people) are so self-centred that it hurts. :) We do "everything" - we "entertain" ourselves with new approaches, frameworks, management ideas - everything except actually "talking" to and "knowing" our users, their lives and true needs.
It's particularly sad, at least for me, to see UX also being "shaped" like this. We can get essay-like posts on building an icon family, yet it's hard to find good content on how company X got to know their users and built what they really wanted.
Most large corporations including Google, Apple, Intel, etc. have an entire department of designers and social scientists (mainly anthropologists) who do ethnographic studies, user studies, etc. in order to understand their customers, new markets, etc. It's really only the startup world that puts less emphasis on user research, but that's in large part because it's resource intensive to do so and many companies have shown that they were able to succeed without it. As startups grow larger and the market more saturated, they must shift from "scratch your own itch" to "user-centered" in order to grow and stay competitive.
Alas, because the startup scene is overwhelmingly centered in the Bay Area and is dominated by white upper-middle-class American men, white-upper-middle-class American male itches get scratched first.
I wonder if there is a grey area where tech solutions can help push slow ideas by easing, if not the "talking", then at least the spreading of ideas.
Let's say a FB group for nurses in Uttar Pradesh, where they can share their thoughts and ideas. You'd use the foot approach to educate part of the group. They could then gather experience, validate their learning, and spread it. The idea is that tech makes the idea spread faster, so you have to educate fewer people by foot.
The impact might be improved by working geographically, so that each nurse talking to the nurses she knows would be unlikely to talk to one already educated. But then, maybe overlapping groups would reinforce the learning. :p
Also, it reminds me of an idea an NGO had: record testimonies via voice calls, then play those testimonies back to peers. Still not the same as building a relationship, but it probably goes in the same direction.
The article presents the reason for the different adoption rates of anaesthesia and sterilization as the visible versus invisible nature of the problems. It seems to me that a better characterization would've been "has a positive effect" versus "prevents a negative effect". For an analogy, if some engineer had suggested that cockpit doors be reinforced in pre-9/11 days, that voice would've been drowned out. If an engineer had gone ahead and implemented it anyway, that would've prompted an enquiry into the additional cost, instead of a reward for discovering and addressing a vulnerability before its consequences hit people. Pilots who keep their cool in emergency situations can get into the news. What about the pilots who've flown again and again without errors? I think these are all related.
It's the timing that's the important part. People learn really quickly not to push their hands against sharp or hot objects, because the pain feedback is instant and biologically determined. And they learn to like sweet and salty things for the same reason - their taste buds tell them to like them. But a much more challenging thing is to learn a concept like "several days after you get injured, an unclean wound will become infected". That knowledge involves visualizing far into the future and applying knowledge that was told to you, not gained from direct prior experience.
I'm curious - since we know that a sales-like process works for these kinds of problems, hasn't the private sector invented more scalable sales methods - and maybe they would be useful here?
Reminds me of another article here on the importance of checklists in surgery. Smart and even dedicated people will fuck up even the easiest and cheapest of solutions when the problem is subtle and the solution requires disciplined application. Introduce a simple checklist - a highly sophisticated piece of paper - and results dramatically improve.
There totally is a tradeoff between productivity and discipline. People naturally 'optimize' their workflows - which is usually a good thing. The problem is our pattern recognition sucks at identifying subtleties like the correlation between skipping washing our hands and more patients developing infections.
Technology can help here a bit. I don't have a checklist item to 'check hard drive space' - I have automated monitoring and alerting systems. But if I didn't, you can bet I'd be moaning and rationalizing about why I shouldn't have to log into each system every hour to verify it had free space, free-able memory, low IO, etc. Which is why I work with computers instead of newborn babies. Much easier to automate.
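To make that concrete: here's a minimal sketch of the kind of automated check I mean, replacing the manual "log in and look" step with code that runs from cron or a monitoring agent. The function name, threshold, and print-based "alerting" are all illustrative assumptions, not any particular monitoring product's API.

```python
import shutil

def check_disk_space(path="/", min_free_fraction=0.10):
    """Return an alert message if free space on `path` drops below
    the threshold fraction, or None if everything is fine."""
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    if free_fraction < min_free_fraction:
        return f"ALERT: only {free_fraction:.1%} free on {path}"
    return None

if __name__ == "__main__":
    # In a real setup the alert would go to email or a pager,
    # and this script would be scheduled, not run by hand.
    msg = check_disk_space("/")
    if msg:
        print(msg)
```

The point is the checklist analogy: the discipline lives in the scheduler and the threshold, not in a human remembering to look.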
Quite interesting that the thing that doesn't scale turns out to be the most effective. I remember reading an article recommending startups do things that don't scale.