
>I think these days, with neural nets being better understood, perhaps we don't fall into this thought trap so much.

From what I've read, the designers of AI/ML systems are less and less able to definitively explain how the algorithm works, or how the system is going to respond to a given input. I suppose for 'sentient' AI that's the goal, but I think it is a bit scary if we get a result from a system, and nobody can tell you why or how it was computed.




That's kind of what I meant. The inner workings of AI/ML are mysterious to us, but we are familiar with the idea of a 'black box' that can do something like 'find a face in this photo', and we know that inside the black box there's a tangled network of weighted connections. We don't imagine a Cartesian theatre inside the black box. But maybe 50 years ago we might have? So perhaps we are getting better at reasoning about how the mind might work. People used to use clockwork as a metaphor for the mind, when clockwork was all they knew. Now we have better metaphors.
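
To make the 'tangled network of weighted connections' concrete, here's a toy sketch (random weights, purely illustrative, not a real face detector) of why nothing inside the box reads like an explanation:

  # Toy 'black box': numbers flow through weighted connections and a
  # score comes out. No step in here resembles a human-readable reason.
  import numpy as np

  rng = np.random.default_rng(0)
  W1 = rng.normal(size=(8, 4))  # input -> hidden weighted connections
  W2 = rng.normal(size=4)       # hidden -> output weighted connections

  def black_box(x):
      # Weighted sums, a squashing nonlinearity, another weighted sum.
      hidden = np.tanh(x @ W1)
      return float(hidden @ W2)

  x = rng.normal(size=8)   # stand-in for pixel data
  print(black_box(x))      # a 'face-ness' score, with no 'why' attached

The point isn't the math, it's that you could print every weight and still have no story about why this input got this score.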


> I think it is a bit scary if we get a result from a system, and nobody can tell you why or how it was computed.

You don't need ML for that. That's Wednesday in any corporation.


But with ML, that's the goal: the system is doing its own thing by design.

Your Wednesday example is just because tribal knowledge is going extinct: companies lay off the old-timers nearing retirement before they can do a brain dump. The younger people know they don't know it, so they rewrite it in a new language, but they don't even know what that new language is doing in the background, because they've imported so many third-party libraries through a newly written class that essentially just reaches out to the original black box but looks new and shiny.


Heh, true. I was thinking more about things like China's social credit score, which uses AI/ML.



