Imagine being a young American right now, trying to find purpose.
If you turn on any screen, all you get is AI grifters telling you that whatever your purpose may be, it's surely going to be replaced by AI soon™.
If (when) that doesn't materialize, you can bet there's a visa designed to make sure your chosen career path will have its wages eroded into the ground by the time you're ready to enter the market.
If you even manage to enter the market, that is. That's becoming an increasingly big if, because your country's colleges are enrolling more foreign competition than ever, and many of those students will take advantage of the F-1 visa/STEM OPT pathway (now the primary route into the country for 71% of H-1Bs[2]) to compete directly against you for entry-level jobs.
That's if you even really get to compete. Increasingly, middle management in tech is dominated by foreigners, many of whom practice extreme ethnic nepotism when hiring. Get your tongue ready to lick a lot of stamps, because you'll have to find job postings in rural towns' physical newspapers, which only list a PO Box to mail your resume to[3].
Which fields have they completely transformed? How was it before and how is it now? I won't pretend like it hasn't impacted my field, but I would say the impact is almost entirely negative.
Everyone who did NLP research or product discovery in the past 5 years had to pivot real hard to salvage their shit post-transformers. They're very disruptively good at most NLP tasks.
edit: post-transformers meaning "in the era after transformers were widely adopted" not some mystical new wave of hypothetical tech to disrupt transformers themselves.
They are now trying to understand why transformers are so good.
That's the thing with deep learning in general: people don't really understand what they are doing. It is a game of throwing stuff at the wall and seeing what sticks. NLP researchers are trying to open up these neural networks and understand where the familiar structures of language form.
I think it is important research. Both for improving models and to better understand language. Traditional NLP research is seen as obsolete by some but I think it is more relevant than ever. We can think of transformer-based LLMs as a life form we have created by accident and NLP researchers as biologists studying it, where companies like OpenAI and DeepSeek are more like breeders.
Sorry but you didn't really answer the question. The original claim was that transformers changed a whole bunch of fields, and you listed literally the one thing language models are directly useful for... modeling language.
I think this might be the ONLY example that doesn't back up the original claim, because of course an advancement in language processing is an advancement in language processing -- that's tautological! every new technology is an advancement in its domain; what's claimed to be special about transformers is that they are allegedly disruptive OUTSIDE of NLP. "Which fields have been transformed?" means ASIDE FROM language processing.
other than disrupting users by forcing "AI" features they don't want on them... what examples of transformers being revolutionary exist outside of NLP?
I think you're underselling the field of language processing - it wasn't just a single field but a bunch of subfields with their own little journals, papers and techniques - someone who researched machine translation approached problems differently to somebody else who did sentiment analysis for marketing.
I had a friend who did PhD research in NLP and I had a problem of extracting some structured data from unstructured text, and he told me to just ask ChatGPT to do it for me.
Basically ChatGPT is almost always better at language-based tasks than most of the specialized techniques those subfields developed over decades for their specific problems.
That's a pretty effing huge deal, even if it falls short of the AGI 2027 hype
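For the curious, that "just ask the model" workflow really is about this short. A rough Python sketch (the sample text, prompt, and model name are made up; it assumes the official openai client and an API key in the environment, and a production version would use the API's structured-output options rather than trusting the reply to be bare JSON):

    import json
    from openai import OpenAI  # assumes the official openai package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    text = (
        "Order #4821 was placed by Maria Lopez on 2024-03-02 for three "
        "16oz jars of wildflower honey, shipping to Austin, TX."
    )
    prompt = (
        "Extract the order as JSON with keys: order_id, customer, date, "
        "item, quantity, city, state. Return only the JSON object.\n\n" + text
    )

    # Model name is a placeholder; use whichever model you have access to.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )

    record = json.loads(response.choices[0].message.content)
    print(record["customer"], record["quantity"], record["item"])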
I think they meant fields of research. If you do anything in NLP, CV, inverse-problem solving or simulations, things have changed drastically.
Some directly, because LLMs and highly capable general purpose classifiers that might be enough for your use case are just out there, and some because of downstream effects, like GPU-compute being far more common, hardware optimized for tasks like matrix multiplication and mature well-maintained libraries with automatic differentiation capabilities. Plus the emergence of things that mix both classical ML and transformers, like training networks to approximate intermolecular potentials faster than the ab-initio calculation, allowing for accelerating molecular dynamics simulations.
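To make that last point concrete, here is a deliberately toy sketch of the surrogate-potential idea: a tiny network fit to a Lennard-Jones curve standing in for expensive reference calculations. (Real ML potentials train on ab-initio data over full atomic environments, not a 1-D pair distance; this only shows the shape of the trick.)

    # Toy surrogate: fit a small net to a pair potential, standing in for the
    # idea of learned models replacing expensive ab-initio energy evaluations.
    import torch
    import torch.nn as nn

    def lennard_jones(r, epsilon=1.0, sigma=1.0):
        # Cheap stand-in for an expensive quantum-chemistry calculation.
        sr6 = (sigma / r) ** 6
        return 4 * epsilon * (sr6 ** 2 - sr6)

    # "Training data": pair distances and their reference energies.
    r = torch.linspace(0.9, 3.0, 512).unsqueeze(1)
    energy = lennard_jones(r)

    model = nn.Sequential(
        nn.Linear(1, 64), nn.Tanh(),
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(2000):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(r), energy)
        loss.backward()
        optimizer.step()

    # Once trained, the surrogate is a single forward pass per evaluation --
    # far cheaper than recomputing the reference potential every MD timestep.
    print(float(nn.functional.mse_loss(model(r), energy)))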
Transformers aren't only used in language processing. They're very useful in image processing, video, audio, etc. They're kind of like a general-purpose replacement for RNNs that are better in many ways.
As a professor and lecturer, I can safely assure you that the transformer model has disrupted the way students learn - in the literal sense of the word.
The goal was never to answer the question. So what if it's worse? It's not worse for the researchers. It's not worse for the CEOs and the people who work for the AI companies. They're bathing in the limelight, so their actual goal, as they would state it to themselves, is: "To get my bit of the limelight."
>The final conversation on Sewell’s screen was with a chatbot in the persona of Daenerys Targaryen, the beautiful princess and Mother of Dragons from “Game of Thrones.”
>
>“I promise I will come home to you,” Sewell wrote. “I love you so much, Dany.”
>
>“I love you, too,” the chatbot replied. “Please come home to me as soon as possible, my love.”
>
>“What if I told you I could come home right now?” he asked.
>
>“Please do, my sweet king.”
>
>Then he pulled the trigger.
Reading the newspaper is such a lovely experience these days. But hey, the AI researchers are really excited so who really cares if stuff like this happens if we can declare that "therapy is transformed!"
It sure is. Could it have been that attention was all that kid needed?
I'm not watching a video on Twitter about self driving from the company who told us twelve years ago that completely autonomous vehicles were a year away as a rebuttal to the point I made.
If you have something relevant to say, you can summarize for the class & include links to your receipts.
Your comebacks aren't as clever as you give yourself credit for. As an admitted 11-year-old, aren't you a little too young to be licking Elon Musk's boots, or posting to this discussion even?
So, unless this went r/woosh over my head....how is current AI better than shit post-transformers? If all....old shit post-transformers are at least deterministic or open and not a randomized shitbox.
Unless I misinterpreted the post, render me confused.
I wasn't too clear, I think. Apologies if the wording was confusing.
People who started their NLP work (PhDs etc; industry research projects) before the LLM / transformer craze had to adapt to the new world. (Hence 'post-mass-uptake-of-transformers')
I guess. That's why I added the "unless I am misinterpreting" part; I still got downvoted for it, I guess because it read as anti-AI. The wording was confusing, but so was my understanding of it as a non-native speaker. Shit happens.
in the super public consumer space, search engines / answer engines (like chatgpt) are the big ones.
on the other hand it's also led to improvements in many places hidden behind the scenes. for example, vision transformers are much more powerful and scalable than many of the other computer vision models which has probably led to new capabilities.
in general, transformers aren't just "generate text" but it's a new foundational model architecture which enables a leap step in many things which require modeling!
Transformers also make for a damn good base to graft just about any other architecture onto.
Like, vision transformers? They seem to work best when they still have a CNN backbone, but the "transformer" component is very good at focusing on relevant information, and doing different things depending on what you want to be done with those images.
And if you bolt that hybrid vision transformer to an even larger language-oriented transformer? That also imbues it with basic problem-solving, world knowledge and commonsense reasoning capabilities - which, in things like advanced OCR systems, are very welcome.
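A back-of-the-envelope sketch of that hybrid shape, in PyTorch (layer sizes, the conv stem, and the classification head are all invented for illustration; this isn't any particular published architecture):

    # CNN stem extracts local features; a transformer encoder on top attends
    # over the resulting patch tokens; a linear head reads out a prediction.
    import torch
    import torch.nn as nn

    class HybridViT(nn.Module):
        def __init__(self, num_classes=10, dim=128):
            super().__init__()
            # Convolutional backbone: 224x224 input -> 14x14 feature map.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, dim, 3, stride=4, padding=1), nn.ReLU(),
            )
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.head = nn.Linear(dim, num_classes)

        def forward(self, images):
            feats = self.backbone(images)              # (B, dim, 14, 14)
            tokens = feats.flatten(2).transpose(1, 2)  # (B, 196, dim) patch tokens
            cls = self.cls_token.expand(tokens.size(0), -1, -1)
            tokens = self.encoder(torch.cat([cls, tokens], dim=1))
            return self.head(tokens[:, 0])             # predict from the CLS token

    logits = HybridViT()(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])

Swap the head for a captioning or detection decoder and the same backbone-plus-attention pattern carries over.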
What exactly are you suggesting - that the SAT solver example given in the paper (applied to the HP model of protein structure), or an improvement on it, could produce protein structure prediction at the level of AlphaFold?
This seems extremely, extremely unlikely for many reasons. The HP model is a simplification of true protein folding/structure adoption, while AlphaFold (and the open source equivalents) works with real proteins. The SAT approach uses little to no prior knowledge about protein structures, unlike AlphaFold (which has basically memorized and generalized the PDB). To express all the necessary details would likely exceed the capabilities of the best SAT solvers.
(don't get me wrong- SAT and other constraint approaches are powerful tools. But I do not think they are the best approach for protein structure prediction).
(Not the OP) If we've learned one thing from the ascent of neural nets, it's that you have no idea whether something works or not until you have tried it. And by "tried it" I mean really, really gave it a go as hard as possible, with all resources you can muster. The industry has thrown everything it's got at Transformers, but there are many other approaches that have at least as promising empirical results and much better theoretical support and have not been pursued with the same fervor, so we have no idea how well or badly they'd do against neural nets if they were given the same treatment.
Like the OP says, it's as if such approaches don't even exist.
Do you understand the relevant fundamental difference between SAT and neural net approaches? One is a machine learning approach, the other is not. We know the computational complexity of SAT solvers; they're fixed algorithms. SAT doesn't learn with more data. It has performance limits and that's the end of the story. BTW, as I mentioned in my other comment, people have been trying SAT solvers in the CASP competition for decades. They got blown away by the transformer approach.
Such approaches exist, and they've been found wanting, and no amount of compute is going to improve their performance limits, because it isn't an ML approach with scaling laws.
This is definitely not some unfair conspiracy against SAT, and probably not against the majority of pre-transformer approaches. I am sympathetic to the concern that transformer based research is getting too much attention at the expense of other approaches.
However, I'd think the success of transformers makes it more likely than ever that proven-promising alternative approaches would get funding as investors try to beat everyone to the next big thing. See quantum computing funding, or funding for way-out-there ASIC startups.
TL;DR I don't know what is meant by the "same treatment" for SAT solvers. Funding is finite and goes toward promising approaches. If there are "at least as promising" approaches, go show clear evidence of that to a VC and I promise you'll get funding.
I don't really understand what point you or parent are trying to make. SAT approaches have been used in CASP, an open competition for protein structure prediction. They have been trying for decades with SAT. The transformer based models blew every approach out of the water to the point of approaching experimental resolution.
Why am I supposed to pretend SAT is being treated unfairly or whatever you guys are expounding? Based on your response and the parent's, I don't think you'd be happy if SAT approaches WERE cited.
Maybe you and parent think every preexisting approach hasn't been proven to be inferior to the transformer approach until some equivalent amount of compute has been thrown at them compared to the transformer approach? That's the best I can come up with. There is no room for 'scaling' gains with SAT solvers that will be found with more compute, it's not an ML approach. That is, it doesn't learn with more data. If you mean something else more specific I'd be interested to know.
In computer vision transformers have basically taken over most perception fields. If you look at paperswithcode benchmarks it’s common to find like 10/10 recent winners being transformer based against common CV problems. Note, I’m not talking about VLMs here, just small ViTs with a few million parameters. YOLOs and other CNNs are still hanging around for detection but it’s only a matter of time.
Can it be that transformer-based solutions come from the well-funded organizations that can spend vast amounts of money on training expensive (O(n^3)) models?
Are there any papers that compare predictive power against compute needed?
You're onto something. The BabyLM competition had caps. Many LLMs were using 1 TB of training data for some time.
In many cases, I can't even see how many GPU hours or what size cluster of which GPUs the pretraining required. If I can't afford it, then it doesn't matter what it achieved. What I can afford is what I have to choose from.
Spam detection and phishing detection are completely different than 5 years ago, as one cannot rely on typos and grammar mistakes to identify bad content.
Spam, scams, propaganda, and astroturfing are easily the largest beneficiaries of LLM automation, so far. LLMs are exactly the 100x rocket-boots their boosters are promising for other areas (without such results outside a few tiny, but sometimes important, niches, so far) when what you're doing is producing throw-away content at enormous scale and have a high tolerance for mistakes, as long as the volume is high.
Robocalls. Almost all that I receive are AIs. It's aggravating because I'd have enjoyed talking to a person in India or wherever, but instead I get the same AIs, which filter or argue with me.
I just bought Robokiller. I have it set to contacts cuz the AIs were calling me all day.
It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing." These problems are largely the result of platform optimization for engagement and SEO competition for attention. This isn't unique to models; even we, humans, when operating without feedback, generate mostly slop. Curation is performed by the environment and the passage of time, which reveals consequences. LLMs taken in isolation from their environment are just as sloppy as brains in a similar situation.
Therefore, the correct attitude to take regarding LLMs is to create ways for them to receive useful feedback on their outputs. When using a coding agent, have the agent work against tests. Scaffold constraints and feedback around it. AlphaZero, for example, had abundant environmental feedback and achieved amazing (superhuman) results. Other Alpha models (for math, coding, etc.) that operated within validation loops reached olympic levels in specific types of problem-solving. The limitation of LLMs is actually a limitation of their incomplete coupling with the external world.
In fact you don't even need a super intelligent agent to make progress; it is sufficient to have copying and competition. Evolution shows it can create all life, including us and our culture and technology, without a very smart learning algorithm. Instead, what it has is plenty of feedback. Intelligence is not in the brain or the LLM; it is in the ecosystem, the society of agents, and the world. Intelligence is the result of having to pay the cost of our execution to continue to exist, a strategy to balance the cost of life.
What I mean by feedback is exploration, when you execute novel actions or actions in novel environment configurations, and observe the outcomes. And adjust, and iterate. So the feedback becomes part of the model, and the model part of the action-feedback process. They co-create each other.
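Concretely, the "have the agent work against tests" loop is tiny to scaffold. A toy Python sketch (the canned propose_solution below stands in for an actual LLM call; only the loop structure is the point):

    # Validation loop: a candidate is only accepted once the tests pass, and
    # failure reports are fed back as context for the next attempt.

    def propose_solution(task: str, feedback: str, attempt: int) -> str:
        # Stand-in "model" that gets it right on the second try; in practice
        # this would be an LLM call with `feedback` folded into the prompt.
        canned = [
            "def add(a, b):\n    return a - b",   # buggy first draft
            "def add(a, b):\n    return a + b",   # corrected draft
        ]
        return canned[min(attempt, len(canned) - 1)]

    def run_tests(code: str) -> tuple[bool, str]:
        namespace = {}
        exec(code, namespace)  # toy harness; never do this with untrusted code
        try:
            assert namespace["add"](2, 3) == 5
            assert namespace["add"](-1, 1) == 0
            return True, "all tests passed"
        except AssertionError:
            return False, "add returned the wrong value for the test inputs"

    def solve_with_feedback(task: str, max_rounds: int = 5):
        feedback = ""
        for attempt in range(max_rounds):
            candidate = propose_solution(task, feedback, attempt)
            ok, report = run_tests(candidate)
            if ok:
                return candidate   # the environment signed off, not just the model
            feedback = report      # failures become the next round's context
        return None

    print(solve_with_feedback("write add(a, b)"))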
> It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing." These problems are largely the result of platform optimization for engagement and SEO competition for attention.
They didn't create those markets, but they're the markets for which LLMs enhance productivity and capability the best right now, because they're the ones that need the least supervision of input to and output from the LLMs, and they happen to be otherwise well-suited to the kind of work it is, besides.
> This isn't unique to models; even we, humans, when operating without feedback, generate mostly slop.
I don't understand the relevance of this.
> Curation is performed by the environment and the passage of time, which reveals consequences.
I'd say it's revealed by human judgement and eroded by chance, but either way, I still don't get the relevance.
> LLMs taken in isolation from their environment are just as sloppy as brains in a similar situation.
Sure? And clouds are often fluffy. Water is often wet. Relevance?
The rest of this is a description of how we can make LLMs work better, which amounts to more work than required to make LLMs pay off enormously for the purposes I called out, so... are we even in disagreement? I don't disagree that perhaps this will change, and explicitly bound my original claim ("so far") for that reason.
... are you actually demonstrating my point, on purpose, by responding with LLM slop?
LLMs can generate slop if used without good feedback or trying to minimize human contribution. But the same LLMs can filter out the dark patterns. They can use search and compare against dozens or hundreds of web pages, which is like the deep research mode outputs. These reports can still contain mistakes, but we can iterate - generate multiple deep reports from different models with different web search tools, and then do comparative analysis once more. There is no reason we should consume raw web full of "spam, scams, propaganda, and astroturfing" today.
For a good while I joked that I could easily write a bot that makes more interesting conversation than you. The human slop will drown in AI slop. Looks like we will need to make more of an effort when publishing, if not develop our own personality.
> It seems unfair to call out LLMs for "spam, scams, propaganda, and astroturfing."
You should hear HN talk about crypto. If the knife were invented today they'd have a field day calling it the most evil plaything of bandits, etc. Nothing about human nature, of course.
Given that we can train a transformer model by shoveling large amounts of inert text at it, and then use it to compose original works and solve original problems with the addition of nothing more than generic computing power, we can conclude that there's nothing special about what the human brain does.
All that remains is to come up with a way to integrate short-term experience into long-term memory, and we can call the job of emulating our brains done, at least in principle. Everything after that will amount to detail work.
If the brain only uses language like a sportscaster explaining post-hoc what the self and others are doing (experimental evidence 2003, empirical proof 2016), then what's special about brains is entirely separate from what language is or appears to be. It's not even like a ticker tape that records trades, it's like a disengaged, arbitrary set of sequences that have nothing to do with what we're doing (and thinking!).
Language is like a disembodied science-fiction narration.
> we can conclude that there's nothing special about what the human brain does
...lol. Yikes.
I do not accept your premise. At all.
> use it to compose original works and solve original problems
Which original works and original problems have LLMs solved, exactly? You might find a random article or stealth marketing paper that claims to have solved some novel problem, but if what you're saying were actually true, we'd be flooded with original works and new problems being solved. So where are all these original works?
> All that remains is to come up with a way to integrate short-term experience into long-term memory, and we can call the job of emulating our brains done, at least in principle
What experience do you have that caused you to believe these things?
No, the burden of proof is on you to deliver. You are the claimant, you provide the proof. You made a drive-by assertion with no evidence or even arguments.
I also do not accept your assertion, at all. Humans largely function on the basis of desire-fulfilment, be that eating, fucking, seeking safety, gaining power, or any of the other myriad human activities. Our brains, and the brains of all the animals before us, have evolved for that purpose. For evidence, start with Skinner or the millions of behavioral analysis studies done in that field.
Our thoughts lend themselves to those activities. They arise from desire. Transformers have nothing to do with human cognition because they do not contain the basic chemical building blocks that precede and give rise to human cognition. They are, in fact, stochastic parrots, that can fool others, like yourself, into believing they are somehow thinking.
[1] Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). Brain, 106(3), 623-642.
[2] Soon, C. S., Brass, M., Heinze, H. J., & Haynes, J. D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543-545.
[3] Berridge, K. C., & Robinson, T. E. (2003). Parsing reward. Trends in Neurosciences, 26(9), 507-513. (This paper reviews the "wanting" vs. "liking" distinction, where unconscious "wanting" or desire is driven by dopamine).
[4] Kavanagh, D. J., Andrade, J., & May, J. (2005). Elaborated Intrusion theory of desire: a multi-component cognitive model of craving. British Journal of Health Psychology, 10(4), 515-532. (This model proposes that desires begin as unconscious "intrusions" that precede conscious thought and elaboration).
If anything, your citation 1, along with subsequent fMRI studies, backs up my point. We literally don't know what we're going to do next. Is that a hallmark of cognition in your book? The rest are simply irrelevant.
> They are, in fact, stochastic parrots, that can fool others, like yourself, into believing they are somehow thinking.
What makes you think you're not arguing with one now?
You are not making an argument, you are just making assertions without evidence and then telling us the burden of proof is on us to tell you why not.
If you went walking down the streets yelling the world is run by a secret cabal of reptile-people without evidence, you would rightfully be declared insane.
Our feelings and desires largely determine the content of our thoughts and actions. LLMs do not function as such.
Whether I am arguing with a parrot or not has nothing to do with cognition. A parrot being able to usefully fool a human has nothing to do with cognition.
I was just saying that it's fine if you don't accept my premise, but that doesn't change the reality of the premise.
The International Math Olympiad qualifies as solving original problems, for example. If you disagree, that's a case you have to make. Transformer models are unquestionably better at math than I am. They are also better at composition, and will soon be better at programming if they aren't already.
Every time a magazine editor is fooled by AI slop, every time an entire subreddit loses the Turing test to somebody's ethically-questionable 'experiment', every time an AI-rendered image wins a contest meant for human artists -- those are original works.
Heck, looking at my Spotify playlist, I'd be amazed if I haven't already been fooled by AI-composed music. If it hasn't happened yet, it will probably happen next week, or maybe next year. Certainly within the next five years.
> The International Math Olympiad qualifies as solving original problems, for example. If you disagree, that's a case you have to make. Transformer models are unquestionably better at math than I am. They are also better at composition, and will soon be better at programming if they aren't already.
No, it does not. You're just telling me you've never seen what these problems are like.
> Every time a magazine editor is fooled by AI slop, every time an entire subreddit loses the Turing test to somebody's ethically-questionable 'experiment', every time an AI-rendered image wins a contest meant for human artists -- those are original works.
That's such an absurd logical leap. If you plagiarize a paper and it fools your English teacher, you did not produce an original work. You fooled someone.
> Heck, looking at my Spotify playlist, I'd be amazed if I haven't already been fooled by AI-composed music.
Who knows, but you've already demonstrated that you're easy to fool, since you've bought all the AI hype and seem to be unwilling to accept that an AI CEO or a politician would lie to you.
> If it hasn't happened yet, it will probably happen next week, or maybe next year. Certainly within the next five years.
I can pull numbers out of my ass too, watch! 5, 18, 33, 1, 556. Impressed? But jokes aside, guesses about the future are not evidence, especially when they're based on nothing but your own misguided gut feeling.
No they don't. Humans also know when they are pretending to know what they are talking about - put said people against the wall and they will freely admit they have no idea what the buzzwords they are saying mean. Machines possess no such characteristic.
WTAF? Maybe you're new here, but the term "hallucinate" came from a very human experience, and was only usurped recently by "AI" bros who wanted to anthropomorphize a tin can.
>Humans also know when they are pretending to know what they are talking about - put said people against the wall and they will freely admit they have no idea what the buzzwords they are saying mean.
>Machines possess no such characteristic.
"AI" will say whatever you want to hear to make you go away. That's the extent of their "characteristic". If it doesn't satisfy the user, they try again, and spit out whatever garbage it calculates should make the user go away. The machine has far less of an "idea" what it's saying.
It’s had an impact on software for sure. Now I have to fix my coworker’s AI slop code all the time. I guess it could be a positive for my job security. But acting like “AI” has had a wildly positive impact on software seems, at best, a simplification and, at worst, the opposite of reality.
I've tried to make AI work but a lot of times the overall productivity gains I do get are so negligible that I wouldn't say it's been transformative for me. I think the fact that so many of us here on HN have such different experiences with AI goes to show that it is indeed not as transformative as we think it is (for the field at least). I'm not trying to invalidate your experience.
If you're being honest, I bet your codebase is going to shit and your skills are in rapid decline. I bet you have a ton of redundant code, broken patterns, shit tests, and your coworkers are noticing the slop and getting tired of fixing it.
Eventually, your code will be such shit that Claude Code will struggle to even do basic CRUD because there are four redundant functions and it keeps editing the wrong ones. Your colleagues will go to edit your code, only to realize that it's such utter garbage that they have to rewrite the whole thing because that's easier than trying to make sense of the slop you produced under your own name.
If you were feeling overwhelmed by management, and Claude Code is alleviating that, I fear you aren't cut out for the work.
Just learned that. Okay, since I’m paying even more for this, in the form of whatever corrupt advantages the oligarch/big corp donors will receive from the administration, I assume I’ll be invited to the parties there?
I have a 2018 Model 3 and in 7 years I have had no issues with reliability. The charge port is a bit wonky about staying up or down; sometimes I have to force it up. That's the only problem in 7 years, and it's just a small occasional annoyance that hasn't bothered me enough to have fixed.
Battery life is still great. I can still easily take cross country road trips. Other than the initial battery life hit you take when the car is new, there isn’t enough degradation for me to even be aware of.
To be fair, a 2014 Tesla was a very early iteration. My 2018 Model 3 has had no issues at all in 7 years, as far as I can tell it operates the same as it did when it was new, aside from the initial battery life hit you take when it’s brand new.
It prevents Americans from noticing and voting in response to the actual problem, which is an aggressive and systematic effort to undercut American wages at all levels of the working class, with immigrants, both legal and non-legal. They've been trying to blame it on AI, and when that doesn't work, they call you a racist (conveniently ignoring that Americans are not all one race).
Man, your anger is just pointed in the wrong direction here.
Be mad at the CEO & hedge fund managers making millions of dollars a year by exploiting the working classes, who will happily move all the jobs overseas tomorrow if they can't hire in America. Not your fellow worker who was born on the other side of an imaginary line.
You're not losing wealth because of the Honduran guy mowing your neighbor's grass. You're losing wealth because the top 1% is accumulating an ever larger share of total wealth.
Can you quote any specific portion of my comment from which you inferred anger?
Also, do you not understand that reducing an American person's wages by having a Honduran guy come into the country and do his job for half the price, who then competes with him for an apartment, increasing his cost of living, while reducing the money Big Corp, Inc. has to spend on labor, is a direct transfer of wealth out of your pocket and into theirs? It's one of many ways rampant immigration is a war on the middle class, and yes, it is Big Corp, Inc. and the institutions that own it who are doing this. I am not angry at the Honduran. I am critical of the people who gaslight us into thinking that allowing him and millions like him to come here and work for less money than their American counterparts is somehow good for us.
People will say Americans don’t want those jobs. But here’s the rub - Americans did want those jobs, back when they paid enough to support a family. Americans also do want trucking jobs, and tech jobs, and medical jobs, and all jobs. And we’re forced to do them for a vanishingly small wage, while we watch our futures disappear, and the hope of ever securing financial security disappears.
> by having a Honduran guy come into the country and do his job for half the price, then use that to compete with him for an apartment, increasing his cost of living
So the immigrants are making half the income, but also paying more for rent? So your thesis rests on the notion that the Honduran is paying like 80% of his income in rent?
This is pretty straightforward economics. The Honduran guy doesn’t need to compete directly with him for housing to contribute to an overall increase in the cost of housing. I assume you are an adult, and since you’re on Hacker News, you’re probably an educated one. It’s crazy to me that I would have to explain such a basic economic concept to an educated adult.
It's crazy to me that you're still blaming the poorest, most exploited, least powerful person in the economy for the problems instead of the people actually responsible.
Since you moved the goalposts to a broader, indirect impact after I pointed out the absurdity of suggesting that the underpaid Honduran laborer is also bidding up the price of housing, I'll go ahead and point out that your argument still fails there. That low-cost labor you're complaining about? It makes construction of housing less expensive. The migrants are building more houses than they're occupying.
Be mad at the top 1%. Be mad at the people who inherited land, wealth and power. Be mad at the people raising your rent and preventing people from building more housing. But don't be mad at some guy just trying to make an honest living doing hard work to stay fed and sheltered because he wasn't born as lucky as you.
Prices have risen by orders of magnitude, untethered to any measurable fundamentals, then crashed, multiple times. I'm not sure what other definition of bubble you're operating with...
That last line isn't true. To be brain-like, it only needs to imitate one thing in the brain. That thing is usually tested in isolation against observed results in human brains. Then, people will combine multiple brain-inspired components in various ways.
That's standard in computational neuroscience. Our standard should simply be whether they are imitating an actual structure or technique in the brain. They usually mention that one. If they don't, it's probably a nonsense comparison to get more views or funding.
I am genuinely baffled by this reply. Every single sentence you've typed is complete and utter nonsense. I'm going to bookmark this as a great example of the Dunning-Kruger effect in the wild.
Just to illustrate the absurdity of your point: I could claim, using your standard, that a fresh pile of cow dung is brain-like because it imitates the warmth and moistness of a brain.
I'll ignore the insults and rhetoric to give some examples.
The brain-inspired papers have built realistic models of specific neurons, spiking, Hebbian learning, learning rates tied to neuron measurements, matched firing patterns, temporal synchronization, hippocampus-like memory, and prediction-based synthesis for self-training.
Brain-like or brain-inspired appears to mean using techniques similar to the brain. They study the brain, develop models that match its machinery, implement them, and compare observed outputs of both. That, which is computational neuroscience, deserves to be called brain-like since it duplicates hypothesized brain techniques with brain-like results.
Others take the principles or behavior of the above, figure out practical designs, and implement them. They have some attributes of the brain-like models or similar behavior but don't duplicate it. They could be called brain-inspired, but we need to be careful: folks could game the label with things that have nothing to do with brain-like models or that drift very far from them.
I prefer to be specific about what is brain-like or brain-inspired. Otherwise, just mention the technique (e.g. spiking NN) to let us focus on what's actually being done.
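To make "Hebbian learning" less abstract for anyone following along, here is roughly what one of those isolated components looks like on its own: a toy numpy sketch of Hebbian learning with Oja's stabilizing term (not taken from any particular paper; real computational-neuroscience models add spiking, measured learning rates, and so on).

    # Hebbian learning with Oja's rule: a single neuron's weights drift toward
    # the leading principal component of its inputs -- correlation structure
    # learned from local activity, no backprop involved.
    import numpy as np

    rng = np.random.default_rng(0)

    # Correlated 2-D inputs: most of the variance lies along the (1, 1) axis.
    cov = np.array([[3.0, 2.5], [2.5, 3.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

    w = rng.normal(size=2)
    lr = 0.01
    for xi in x:
        y = w @ xi
        w += lr * y * (xi - y * w)   # Hebbian term (y*xi) plus Oja's decay (-y^2*w)

    print(w / np.linalg.norm(w))     # ~[0.707, 0.707] up to sign: the top eigenvector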
Be specific, provide examples. Much of the things brains do that we call intelligent are totally unknown to us. We have, quite literally, no idea what algorithms the brain employs. If you want to talk about intelligence, I don’t know why you’re talking about neuron spiking. We don’t talk about semiconductor voltages when we talk about computer programs we’re working on.
AI systems are software, so if you want to build something brain like, you need to understand what the brain is actually like. And we don’t.
You can, of course, use the almost equivalent scientific-sounding Latin-derived term ("neuromorphic"), buy popcorn, and come back with it for a discussion about memristors.
This may just be wishful thinking, but is it reasonable to hope that it won't hit the middle class as bad this time around? Seems like most of the people holding the AI bags are the very wealthy, and it doesn't seem like these AI companies are employing a huge number of people.
[1] https://wol.iza.org/news/tech-giant-accused-of-hiring-bias-i...
https://nypost.com/2022/08/15/tech-giants-confront-ancient-i...
[2] https://www.uscis.gov/sites/default/files/document/reports/o...
[3] https://www.newsweek.com/h1b-job-ads-green-cards-targeted-im...
Icing on the cake: https://x.com/avidandiya/status/1982231594791325903?s=46
https://defencepk.com/forums/threads/frustrated-google-emplo...
Might as well goon?