AndrewThrowaway's comments

I think I share exactly the same experience, and the very same thing helped greatly for me. I just wasn't diagnosed with postpartum depression but rather with good old GAD.

After the birth of our son, life changed 180 degrees: no free time, no personal time, etc. All you can expect. Less sleep, a lot of unknowns and anxieties. Again, expected. Less time with my wife. Feelings of being down, depression. Some anger problems. But it all looks like everyday life, and everybody experiences stuff like that.

Then my father died. So I became "the man of the family".

Then corona hit. Then unknowns at work.

Then one day I was playing with my son and he hit his back, and I felt so sick. However, the X-rays showed nothing, and he was just fine the next day. Life continued.

And one day I was just chilling and having some alone time, and it all hit me like a train. Suddenly I thought I was going to die. Somehow my heart would fail (my father had heart problems), and I basically could not function as a person. I just wanted somebody to take me to the hospital or something and just care for me, so that if my heart started failing, medics would be near. I couldn't work. One of the worst weeks or so of my life.

Of course my heart was just fine.

All these things had been accumulating for some time already. The very same Lexapro helped me a lot. No side effects whatsoever. After a few months I felt like I was actually young again. Things started to improve both at home and at work. Anxiety is 99% gone. I am just so glad I got help. Actually, I was forced to, as I couldn't function.


How are things after? A few PCP or psychiatrist friends told me that these meds have a huge addiction potential. Could you go without them again?


For me there has been no "after" yet. My psychiatrist suggests staying on the meds for at least a year or so, combined with therapy. Going off them should be very slow, taking a month or so of gradual dose lowering.

I guess the body will have adjusted to the increased levels of serotonin, and lowering the dose very slowly should help it adjust the levels by itself. Nothing that hasn't been done by millions of people already.


Most psych meds are not addictive, in the sense that they don't give you a high to be abused.

You have to titrate off a lot of them, but that's true of a lot of drugs and that's also not addiction.

Benzos _are_ potentially addictive, but if used correctly can help a lot. They also aren't generally prescribed as first-line treatments.


You did the right thing no doubt.

However, your experience seems to prevent you from seeing the bigger picture.

"Popping pills" is probably chosen not because people don't want to solve their problems and change their lives. It is probably because it is the only solution available at that time that lets them actually function as a person and a member of society.

It is great that you had the money and even the time to go to therapy for 6 months as new parents. A lot of parents of newborns will simply not be able to afford it, both financially and because of time constraints. Did you bring your newborn to the therapy? How much did it cost? Would every single mother be able to afford 6 months of therapy? Some lower-middle-class family? Some family in a very rural area? Please try to see the bigger picture here and befriend more people from different walks of life.

People take pain medication for back pain not because they don't want to fix their backs. If anybody could fix their back today instead of taking pills they would do it.


So how do I easily draw? I can see the potential to use it in documentation etc., but how do I create e.g. a mockup? Just by typing all the symbols I need to remember?


Many diagrams I am able to draw manually as I'm editing source code in Emacs, but for more complicated ones I tend to use this: https://asciiflow.com . To each their own, so pick your favorite ASCII editor.
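For instance, a simple (hypothetical) client-server mock in plain ASCII, the kind of boxes-and-arrows diagram asciiflow produces:

```
+--------+    request    +--------+    query    +----------+
| Client | ------------> | Server | ----------> | Database |
+--------+               +--------+             +----------+
```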


Or have an LLM do it after indexing the project.


I've asked ChatGPT to render simple alphabet letters in ASCII art and it gives ridiculous results. It seems like the worst skill it has.


Bialetti makes excellent moka pots for induction. I find it even easier on an induction stove to control the flow, as you can basically leave it on "5" and the coffee slowly pours out. Much harder with a real flame.

https://www.bialetti.com/it_en/moka-induction-rossa.html


At some point I started believing in something like Gell-Mann amnesia, but for people.

You talk to or listen to this seemingly intelligent person who seems so confident in his field. Or rather, in one of the fields. And then this person starts talking about something completely different, and he couldn't be more wrong.

The question now is how much he doesn't know about his own field, which he also seems so confident about.

E.g. some philosopher-psychiatrist starts giving dietary advice about eating only raw meat. You would think an overall intelligent person should not be giving such advice. So can we just ignore him as a dietitian, or should we also start questioning his views as a psychiatrist?


This is a fundamental truth everybody needs to understand.

Not my idea; I heard it somewhere: the crucial difference between a human being and AI is that if you show a 3-year-old kid one picture of a cat, the kid can recognize all other cats, be it a lion or a tiger.

You can feed an ML model 5,000 pictures of cats and it can recognize a cat in a picture with something like 95% confidence.


> That the crucial difference between a human being and AI is that if you show a 3 year old kid one picture of a cat, a kid can recognize all other cats.

Have you done this test for real? My nephew calls everything that moves but is not a human a "dog". In his world there are flying dogs and swimming dogs. Probably if he saw an elephant, that would be a big dog, while a giraffe would be a tall dog. Now obviously he will learn the customary categories, but it is definitely not a one-shot thing.

> You can feed ML 5000 pictures of cats and it can recognize a cat in a picture with something like 95% confidence.

This is an area of active research. One term of art is "one-shot learning". The general idea is that you show 5 million things to the AI, but none of them are a harumpf, and it learns how "things" are. And then you show it a single image of a harumpf and tell it that it is a harumpf and it will be able to classify things as harumpf or not from then on.

How well do these things work? They kinda work, but you can still get a PhD for making a better one. So they are not that great. But I wouldn't hang my hat on this one "crucial difference between a human being and an AI", because you might get surprised once humans teach the AIs this trick too.
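As a rough illustration of the idea, here is a toy sketch in pure Python. The feature vectors are made up; in a real system they would be embeddings produced by a network pretrained on millions of "things", none of them a harumpf. A single labeled example becomes a class prototype, and new inputs are classified by nearest prototype under cosine similarity.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# One prototype embedding per class, learned from a single example each.
prototypes = {}

def learn_one_shot(label, embedding):
    # The single labeled example becomes the class prototype.
    prototypes[label] = embedding

def classify(embedding):
    # Nearest prototype by cosine similarity.
    return max(prototypes, key=lambda lbl: cosine(prototypes[lbl], embedding))

# Hypothetical embeddings, invented for illustration.
learn_one_shot("harumpf", [0.9, 0.1, 0.3])
learn_one_shot("cat",     [0.1, 0.8, 0.2])

print(classify([0.85, 0.15, 0.25]))  # an embedding close to the harumpf prototype
```

The heavy lifting in real one-shot systems is in learning an embedding space where "similar things land close together"; the classification step itself can be this simple.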


>Have you done this test for real? My nephew calls everything which moves but not a human a "dog". In his world there are flying dogs and swimming dogs. Probably if he would see an elephant that would be a big dog, while a giraffe would be a tall dog.

As long as he can tell a "tall dog" (giraffe) apart from a "swimming dog" (say, a duck) that's still compatible with what the parent says.

It's about recognizing them as distinct and assigning them to the same class of things; the rest is just naming. That is, it's at the language and vocabulary level, not at the recognition level.


And how many shots does it take for the kid to learn what "swimming" means or what "tall" means?


Not that many. They can do it at like 2-3 years old, with 1/1,000,000th the training set, at least words-wise.


All the words they have experienced up to that point are part of the training set, as well as all the people and things they have seen.


Even if the people around a 3-year-old child talked to it constantly, 16 hours per day at 150 words per minute, that would amount to only around 1 GB of text in its training data. And not even good-quality words; a lot of it would be variations on mundane everyday chit-chat and "who's a cute baby?! You're a cute baby!"

For comparison, GPT has something like 1 TB of text, and that's hundreds of thousands of books, articles, Wikipedia, and so on. So already three orders of magnitude more.

And of course the "16 hours x 150 words per minute x 3 years" figure is itself an overestimate by a few orders of magnitude.
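As a sanity check on that estimate, assuming roughly 6 bytes per English word (5 letters plus a trailing space; an assumption), the deliberately generous upper bound does work out to just under 1 GB:

```python
# Deliberately generous upper bound on the text a 3-year-old has heard.
hours_per_day = 16
words_per_minute = 150
days = 3 * 365

words = hours_per_day * 60 * words_per_minute * days  # words heard in 3 years
bytes_total = words * 6          # assume ~6 bytes per word (5 letters + space)
gigabytes = bytes_total / 1e9

print(f"{words:,} words ~= {gigabytes:.2f} GB")
```

That is about 158 million words, versus the hundreds of billions of words in a modern LLM training corpus.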


I disagree to an extent with your example. I'm not sure a child would recognise all cats from a single photograph of a cat, and I'm not sure it would be possible to test this (what child first encounters an image of a cat at the age of three?).

As a related example, 3 year old children often cannot reliably recognise basic shapes (letters from the alphabet), and certainly not after a single example. I daresay an ML model would outperform a child in OCR even with significantly less exposure to letter forms in its training.

When a child looks at a picture of a cat at the age of three, they have already learned to recognise animals, faces, fur, object, depths in photographs, the concept of a cat being a physical thing which can be found in three dimensional space, how physical things in three dimensional space appear when represented in 2D… the list goes on.

It’s simply not within our capabilities at the moment to train ML models in the same way.


> That the crucial difference between a human being and AI is that if you show a 3 year old kid one picture of a cat, a kid can recognize all other cats. Was it a lion or a tiger.

I don't understand what you mean with this, as that was certainly not me as a child. As a child I thought cats and dogs might be differently sexed animals of the same species. I also thought that the big cats could be related to each other, though how any of them was related to the housecat was beyond me, given the size difference.


>As a child I thought cats and dogs might be differently sexed animals of the same species.

Sounds irrelevant to the parent's point. Which isn't that you knew what a cat was (with regards to taxonomy or whatever), but that you could tell one from a dog or a fire hydrant.


Actually, recalling my argument in more detail, I take back my agreement. The parent's point is that a child will be able to recognize big cats as cats, but dogs as not cats. My point was that as a child, this was not true for me. In addition, NNs can reliably be trained to recognize housecat vs. not-housecat on 1000 images.


Ah you make a good point regarding my response.


This is an important distinction, but in my opinion it oversimplifies the situation. A 3-year-old may have seen only one cat, but it has probably seen many other things in its life already. Humans likely also have prewired neurology to recognise eyes and other common features. So the analogy seems more to me like one-shot or few-shot learning with an already partially trained model.


Not in my experience with my children. 3-year-olds hide their heads under the blanket and therefore think I can't see them. They often lack context, just like ChatGPT and whatnot. To use the hip word, they hallucinate more often than not.


Note that approaches such as hyperdimensional computing somewhat undermine your argument.

With HDC, a network learns to encode features it sees in a (many-dimensional, hence the name) hypervector.

Once trained (without cats), show it a cat picture for the first time and it will encode it. Other cat pictures will be measurably close, hopefully (it may require a few samples to pin down the "cat" signature, though).

It reminds me a bit of LoRA embeddings (is that the proper term?), or just changing the last (fully connected) layer of a trained neural network.
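To make the bundling-and-comparing idea concrete, here is a toy, hypothetical sketch in pure Python: random bipolar hypervectors stand in for learned feature encodings, a majority vote bundles features into one "picture" hypervector, and a normalized dot product measures closeness. Encodings sharing more features come out measurably closer.

```python
import random

random.seed(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    # Random bipolar hypervector; in high dimensions, random
    # hypervectors are nearly orthogonal (similarity close to 0).
    return [random.choice((-1, 1)) for _ in range(D)]

def bundle(hvs):
    # Elementwise majority vote; the bundle stays similar to each input.
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def similarity(a, b):
    # Normalized dot product in [-1, 1].
    return sum(x * y for x, y in zip(a, b)) / D

# Hypothetical feature hypervectors (in a real system a trained network
# would produce these from the image).
features = {name: random_hv() for name in
            ("fur", "whiskers", "pointy_ears", "tail", "floppy_ears", "bark")}

cat1 = bundle([features["fur"], features["whiskers"], features["pointy_ears"]])
cat2 = bundle([features["fur"], features["whiskers"], features["tail"]])
dog  = bundle([features["fur"], features["floppy_ears"], features["bark"]])

print(similarity(cat1, cat2), similarity(cat1, dog))
```

The two "cat" bundles share two of three features and land clearly closer to each other than to the dog bundle, which shares only one; that gap is what lets a single new "cat" encoding pull later cat pictures toward it.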


And if you fold a napkin in a weird way with the right lighting, it might flag it as a tiger too.


People watch clouds for that reason too!


"Overthrowing government" is legal. By winning an election.


North Korea would like a word.


I also thought about this. In 5-10 years it would equalize, as consecutive years would have fewer deaths.

Then again, you will have excess deaths because of the mental health problems that covid causes, and medical problems because medical care was not available, etc. These would not equalize.


TBH reads like COBOL.


I think this is a good point, and maybe it raises a question: are they really quitting, or are they e.g. being bullied into quitting? I have no doubt some portion of the "quitters" would be exactly that: people "asked to quit".

