
> The important question is whether what is built provides a benefit, i.e., some gain in a prospective customer’s life.

"The important question is whether it can goad people into playing yet another game of the prisoner's dilemma to introduce even more technology into their lives with relatively little actual benefit"

There, fixed that for you.


Exactly as I predicted -- AI will be used by the already rich to become even richer, because it takes away one of the few assets still held by poorer individuals: the value of their talent. Consider this carefully, all programmers involved in creating AI: you are part of the degradation of what makes us human, and you are turning us further into machines.

That is an excellent point! And one that is often not factored into economic models.

One of the happiest days of my life was when I quit my office job and got a new one where I could work from home, with a boss who trusted me to work at the times that are best for me. Since then, life has been much better.

That's the trick: "at a time that's best for me". If I could work, say, 2 hours during the day and 4-5 at night, I'd have killer productivity in both my home and work life.

There are companies that understand this and those that don't. For me, as an employee, the choice is crystal clear.

And there are managers that understand this and those that don't.

I've been trying to find one, but I've had very little success, at least with software engineering jobs. What other fields offer remote work?

Honestly, I couldn't care less about pay; being able to work remotely, as I previously did, was so much better for my mental health and well-being.


I quit programming and became a photographer and writer. I'd just look everywhere. I've found that the key to being happy in a job seems rather unrelated to the work itself and more related to the working conditions.

Plus, I still get to use programming at my job for random tasks like writing small WordPress plugins and generating some automated HTML. For me, it's a lot more fun than maintaining large codebases.


Most of the time, tags don't seem to cause many problems, but sometimes there can be some effects [1].

Conservation efforts can be effective if evidence is needed in order to fight against further land destruction (such as property development), especially when migratory birds use small but important areas for stopovers.

On the other hand, a lot of conservation research is merely clarifying somewhat obvious problems, but the current capitalistic system is very inefficient when it comes to dealing with these problems: in it, you must hit people over the head with the obvious, because people are more attached to money than preserving our ecosystems. If we were smart, we could do more with much less research.

[1] https://www.jstor.org/stable/3802820


You can't solve what you can't measure.

The implicit argument you are making is that something has to be measurable to be worth saving. The most important things in the world you can't measure.

Again, a false dichotomy. The state of many animals is already well enough known, and so are the problems they face. The debate isn't whether to measure, but how much measurement occurs and what that reflects about the poor state of how conservation works -- which is not the fault of conservationists, of course.

Not a fan of the new Apple Intelligence. I don't want gigabytes of code on my machine if that code is generative AI; I'd rather have it be optional.

Generative AI has a generally soulless look about it that is easy to spot and hard to explain.

Not really that surprising. The first iPhones were a big upgrade, the next ones less so. All technology reaches the point of diminishing returns, at which point companies find new ways of entrenching new types of technology into our lives, only to repeat the process.

These small incremental AI tools seem, in isolation, to be helpful things for human coders. But over a period of decades, these iterations will eventually become mostly autonomous, writing code by themselves with little human intervention compared to now. And that could be a very dangerous thing for humanity, but most people working on this stuff don't care, because by the time it happens they will be retired with a nice piece of private property that isolates them from the suffering of those who have not yet obtained theirs.

If the danger is a high degree of inequality among humans on Earth, we are already there.

Inequality isn't on/off, though; it comes in degrees. The current existence of inequality isn't a logical dismissal of attempts to prevent it from worsening.

And of course, the danger of AI is much greater than just inequality: it is the further reduction of all human beings to cogs in a machine, and that is bad even if we all end up being relatively equal cogs.


Every time it’s the same pattern:

“Autonomous AI is dangerous”

“pfft, are you worried about X outcome? We already had it”


Because it's true? We already had a world war between autonomous AIs called national militaries before they (mostly) learned that total conflict doesn't result in them getting more resources. And autonomous AIs called corporations exploit our planet constantly in paper-clip maximizer fashion. The fact that they are running on meatware doesn't help at all.

And we see those as problems. But they were constrained by being executed by humans. Now the AI fans want to make more and more actually autonomous ones executed by machines? The problems would be orders of magnitude bigger. They can do far more at scale. They can perfectly recall, process and copy all information they're exposed to. And they don't have a self-preservation instinct like people with bodies do.

> They can perfectly recall, process and copy all information they’re exposed to.

I'm not sure if it's better or worse that the computers can do that while the AIs running on them get confused and mix things up.

> And they don't have a self-preservation instinct like people with bodies do.

Not so sure about that; self-preservation is an instrumental goal for almost any other goal. Even a system that doesn't have any self-awareness, but is subject to a genetic algorithm, would probably end up with that behaviour.
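To make that concrete, here's a toy sketch (all names and numbers are made up for illustration, not from any real system): agents are scored only on a task skill, yet a "persistence" trait that merely lets them survive a random shutdown gets selected for anyway, because shut-down agents earn nothing.

    # Toy genetic algorithm: persistence is never rewarded directly,
    # but it rises anyway because shut-down agents score zero.
    import random

    POP, GENS = 200, 60

    def make_agent():
        # "skill" is what gets scored; "persist" only affects survival.
        return {"skill": random.random(), "persist": random.random()}

    def fitness(agent):
        # Random "shutdown" event: an agent that doesn't persist scores nothing.
        survived = random.random() < agent["persist"]
        return agent["skill"] if survived else 0.0

    def mutate(agent):
        return {k: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
                for k, v in agent.items()}

    pop = [make_agent() for _ in range(POP)]
    for _ in range(GENS):
        parents = sorted(pop, key=fitness, reverse=True)[:POP // 4]
        pop = [mutate(random.choice(parents)) for _ in range(POP)]

    print(sum(a["persist"] for a in pop) / POP)  # drifts toward 1.0

Self-preservation falls out of the selection pressure even though nothing in the objective asks for it.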


If we are still talking about AI-enhanced companies, it's not that companies evolve; it's that the companies that are unfit die off. Paul Graham put it humorously in a very old speech I can't find...

I was responding to (what I thought was) a point about AIs themselves rather than one specifically about corporations.

Corporations (and bureaucracies) don't follow the same maths as evolution — although they do mutate, merge, split, share memes, etc., the difference is that "success" isn't measured in number of descendants.

But even then, organisations that last generally have their own survival encoded into their structure, whether or not any particular individual within them also wants the organisation to continue.


> And we see those as problems. But they were constrained by being executed by humans. Now the AI fans want to make more and more actually autonomous ones executed by machines?

Those things that we see as problems are exactly the things that our civilization relies on. Every time you make a purchase, you rely on the fact that meatware AI corporations exploit the environment and employees ruthlessly.

Every time you enjoy safety, you rely on the fact that meatware military AIs are hell-bent on acquiring the most dangerous hardware for themselves, and on their assessment that not using that hardware in any serious way is more beneficial to them.

All the development of humanity comes from doing those problematic and horrible things more efficiently. That's why automating it with silicon AI is nothing new and nothing wrong.

I'm afraid that to evolve away from those problems we'd need a paradigm shift in what humanity actually is. Because as it is now, any AI, meatware or hardware, will eventually get aligned with what humans want, regardless of how problematic and horrible humans find the stuff they want.

It's a bit like with veganism. Killing animals is horrible, but humanity largely remains dependent on it for its protein intake. And any strategic improvements in animal welfare came from new technologies applied to raising and killing animals at scale. In the absence of those technologies, the welfare of animals raised to feed a growing human population would be far worse.

There's always, of course, the danger of a brief period of misalignment as new technologies come into existence. We paid for the industrial revolution with two world wars until the meatware AIs learned. Surprisingly, they managed to learn about nuclear technology with relatively minor loss of life (<1 million). But the overarching theme is that learning faster is better. So silicon AIs are not some new dangerous technology, but rather a tool for already existing and entrenched AIs to learn faster about what doesn't serve their goals.


If you are okay with more of it, then it is clear which side of the gap you are on.

Inequality has always had a breaking point where people revolt. There are no sides, only mechanisms.

Exactly. And it won’t isolate them btw. The AI will affect them too.

15GB for some crappy generative AI. No thanks. Will stick with Monterey as long as possible. This new generative AI thing may just be the thing to make me install Asahi Linux on my computer. Hopefully it's feature-complete enough soon.
