Stupid question: Why can't models be trained in such a way to rate the authoritativeness of inputs? As a human, I contain a lot of bad information, but I'm aware of the source. I trust my physics textbook over something my nephew thinks.
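One naive framing of the idea (a hypothetical sketch, not how any real model is trained): attach a per-source reliability weight to each training example and scale the loss by it, so disagreeing with the textbook costs more than disagreeing with the nephew. The source names and weights below are invented purely for illustration.

```python
import math

# Hypothetical per-source reliability weights (invented for illustration).
SOURCE_WEIGHTS = {
    "physics_textbook": 1.0,
    "random_forum_post": 0.3,
    "my_nephew": 0.1,
}

def weighted_nll(prob_correct: float, source: str) -> float:
    """Negative log-likelihood scaled by trust in the source.

    A low-trust source contributes a smaller training signal, so the
    model is penalized less for disagreeing with it.
    """
    return SOURCE_WEIGHTS[source] * -math.log(prob_correct)

# The same model confidence costs much more when the claim comes from
# a trusted source than from an unreliable one.
textbook_loss = weighted_nll(0.5, "physics_textbook")
nephew_loss = weighted_nll(0.5, "my_nephew")
```

In practice the hard part is the weights themselves: someone (or some process) has to decide what counts as authoritative, which is its own contested problem.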
Designers should feel the "need to be original", in the sense that every project is different, and can be looked at with fresh eyes.
Perhaps a project is 50% similar to existing project A, 45% similar to existing project B, and 5% novel. Finding this correct balance of copies of A and B, and finding a good solution to the novel part - this process feels "original" in many ways.
Location: UK (can travel, have green card)
Remote: Ok
Willing to relocate: Maybe
Technologies: Any
Resume: Facebook Eng and PM, Angel, Self-taught coder and designer
Email: mark.kinsey@gmail.com
I've done everything from strategy work to on-the-ground engineering.
I found this statement by Sam quite amusing. It transmits exactly zero information (it's a given that models will improve over time), yet it sounds profound and ambitious.
I got the same vibe from him on the All In podcast. For every question, he would answer with a vaguely profound statement, talking in circles without really saying anything. On multiple occasions he would answer like 'In some ways yes, in some ways no...' and then just change the subject.
Maybe an odd take, but I'm not sure what people actually mean when they say "AI terrifies them". Terrified is a strong word. Are people unable to sleep? Biting their nails constantly? Is this the same terror as watching a horror movie? Being chased by a mountain lion?
I have a suspicion that it's sort of a default response. Socially expected? Then when you poll people - "Are you worried about AI doing XYZ?" - they just say yes, because they want to seem informed, and like the kind of person who considers things carefully.
Honestly not sure what is going on. I'm concerned about AI, but I don't feel any actual emotion about it. Arguably I must have some emotion to generate an opinion, but it's below the conscious threshold, obviously.
Does knowing an AI can do something reduce our enjoyment of doing that thing ourselves? I don't know if we have enough data/experience to say.
An example in my own life: I'm sure an AI can perform a piano piece (via MIDI) better than I can, but I've literally never considered that this has any bearing on how much I like playing piano.
The person writing here is one of the entities/personalities, yes. We're all aware of each other, so that's what makes this different from DID (Dissociative Identity Disorder) - or at least it was different back during the initial diagnosis of schizotypal personality disorder in 2001.
Narcissistic behavior in general can be viewed as a defense mechanism to avoid feeling the opposite: small and inadequate. Often we carry shame from childhood experiences of feeling inadequate - failing in ways we felt others judged us for, feeling humiliated and ashamed, etc.
Imho, one great approach is to examine your past for such experiences, then bring them to mind and reexamine the meaning we assigned to them at the time.