oh wow that is actually such a brilliant little use case-- really cuts to the core of the real "magic" of ai: that it can just keep running continuously. it never gets tired, and never gets tired of thinking.
yeah, in what world would that not help? they're not getting kicked out of the school system, just the school. lowest-common-denominator idealism like you are espousing here is one of the primary reasons why schools have gotten so terrible.
Obviously there is a bigger picture in all of this. I don't know what the correct balance is, but assume that all problem kids (for some strict definition of problem) were expelled and placed in a "problem school". I highly doubt that is going to improve those children's lives or attitudes, but obviously it will help those at the previous school. So we end up with a large number of kids who will almost certainly grow up to be "problem adults", in many cases criminals. Suddenly the problem you solved for the first school is now the problem of society at large.
In a perfect world, most of those problem children would be mentored correctly in regular schools and given a path to a better adult life, and therefore would never become "problem adults".
In practice, it doesn't seem to work like that, and I agree, "problem children" do cause friction, disruption, and worse for other children at regular schools.
uhhh yeah it's not visible because it's not used for anything. because it runs contrary to Google's entire raison d'être. if it's not turned on by default, what is even the point of doing it at all other than to pacify engineers who are perfectly happy to miss the forest for the trees? it's kind of like saying that you have the power of invisibility, but it only works if no one is looking at you.
what exactly do you want the llm to do here? if the ask were so unambiguous and simple that it could be reliably generated, then the interface wouldn't be so complicated to use in the first place! LLMs are not in any way best suited for one-shot prompt => perfect output, and expectations to that effect are extremely unreasonable. the reason LLMs are still hard for beginners to use is that the underlying software is hard to use correctly. as with LLM output, so with life itself: the results you get from using a tool can only ever be as good as the (mental) model used to choose that tool & its inputs. if all the information required to generate the output were already contained in the initial prompt, then there would be absolutely no need to use the LLM at all.
uh... "... worse for our culture than anything a foreign country has done to us"... yet. this is only true because we find ourselves in an unprecedented situation-- up to now, the U.S. has had a monopoly on social media giants and the like. it is absolutely not guaranteed that this will hold, and there are many reasons to suspect that it won't. given how china regards U.S. sovereignty when it comes to setting up their own (secret) de facto government, police state, etc. on U.S. soil, it would be shocking if they didn't put their thumb on the scale.
and none of that is to say that i agree with the ban-- i think the sheer unamerican-ness, frankly, of taking possession of foreign assets for american gain at others' expense is as blatant a signal as possible that we shouldn't be doing it. if we are trying to protect america, western values, etc., but we don't act in accordance with those values, what are we even protecting? the way to protect the american way of life is not through becoming more "unamerican".
in my personal opinion, the so-called "decline of western values", or whatever, has nothing to do with imperialism, nor with those values being short-sighted or wrong. it comes from our collective crisis of confidence in those values, brought on by the (many) mistakes we have made along the way. the moral compass still points essentially in the same direction; it's just that for whatever reason we seem to have convinced ourselves that we don't want to go North after all, and instead prefer to wander around the map aimlessly (all the while shitting on the compass for not taking us where we want to go). and so now we have people who unironically defend organizations like Hamas at the expense of the United States, as though believing in universal freedom and equality of opportunity were merely a "cultural" value rather than an absolute one-- and, more insanely, as though these values were somehow subordinate to the political issue du jour. these values don't give anyone carte blanche to coerce others who don't share them, but the idea that they are somehow subjective or relative-- that they are negotiable-- is the height of insanity.
that might be a good analogy because, generally speaking, skiing is much, much harder without poles (hence an expert would never do it (unless they are hitting an olympic vert ramp or something)), but it is exactly for this reason that the poles distract you from learning the "right" lesson-- people use their poles pretty much randomly at the start, and the poles help them... to do the wrong thing. but once you have the real crux of skiing down-- body position and balance + using your edges and weight shifts to turn-- poles are completely trivial to add.
respectfully, i feel i am alone in this opinion, but i’m not even remotely convinced that there isn’t a “superintelligent being” hiding in plain sight in the tools we already have at hand. people always grouse about the quality of LLM outputs, and then you realize that they (tend to) think the LLM is somehow supposed to read their minds and deliver the answer they “didn’t need, but deserved”… i’d take my chances being dumped in 12th-century england and getting bleated at in old english over being an LLM that has to suffer through a three-sentence essay about someone’s brilliant, life-altering startup idea, grappling with the overwhelming certainty that there is absolutely no conceivable satisfactory answer to a question poorly conceived.
for all we (well, “i”, i guess) know, “superintelligence” is nothing more than a(n extremely) clever arrangement of millions of gpt-3 prompts working together in harmony. is it really so heretical to think that silicon + a semi-quadrillion human-hour-dollars might just have the raw information-theoretic “measurables” to be comparable to those of us exalted organic, enlightened lifeforms?
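to make that concrete (and to be clear, this is a toy sketch of the shape of the idea, not a claim about how any real system is built-- the llm() function below is a hypothetical stand-in for whatever completion endpoint you like):

    # toy sketch: "superintelligence" as many small prompts in harmony.
    # recursively split a task, solve the leaves with single completions,
    # then merge the partial answers back up the tree.

    def llm(prompt: str) -> str:
        # hypothetical stand-in: wire this up to any real completion API
        return ""

    def solve(task: str, depth: int = 0, max_depth: int = 3) -> str:
        if depth >= max_depth:
            return llm(f"solve directly: {task}")
        plan = llm(f"split into 2-3 subtasks, one per line: {task}")
        subtasks = [s.strip() for s in plan.splitlines() if s.strip()]
        if len(subtasks) <= 1:
            return llm(f"solve directly: {task}")
        partials = [solve(s, depth + 1, max_depth) for s in subtasks]
        return llm(f"combine these partial answers to {task!r}: {partials}")

the point isn’t that this particular loop is smart; it’s that whatever intelligence emerges may live in the arrangement of the calls rather than in any single prompt.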
clearly others “know” much more about the limits of these things than i do. i have just spent something like 16 hours a day for ~18 months talking to the damned heretic with my own two hands— i am far from an authority on the subject. but beyond the classical “hard” cases (deep math, … the inevitability of death …?), i personally have yet to see a case where an LLM is truly given all the salient information in an architecturally useful way and still produces “troublesome output”. you put more bits into the prompt, you get more bits out. yes, there is, in my opinion, an inherent conservation law here— no amount of input bits yields superlinear returns (as far as i have seen). but who looks at an exponential under whose profoundly extensive shadow we have continued to lose ground for… a half-century? … and says “nah, that can never matter, because i am actually, secretly, so special that the profound power i embody (but, somehow, never manage to use in such a profound way as to actually tilt the balance ‘myself’) is beyond compare, beyond imitation”— not to be overly flip, but it sure is hard to distinguish that mindset from… “mommy said i was special”. and i say this all with my eyes keenly aware of my own reflection.
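(an aside, to make that “conservation law” slightly less hand-wavy: with the weights frozen, the model is a fixed map from prompt to output, so for whatever you actually care about, T, the chain T → prompt → output is markov, and the data processing inequality gives

    I(T;\, \mathrm{output}) \;\le\; I(T;\, \mathrm{prompt})

i.e. the output cannot carry more information about your specific problem than the prompt did-- the weights, being constant, add polish but no task-specific bits. that is the “no superlinear returns” intuition, stated loosely.)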
the irony of it all is that so much of this reasoning is completely contingent on a Leibnizian “we are living in the best of all possible worlds” axiom that i am certain i am actually more in accord with than anyone who opines thusly… it’s all “unscientific”… until it isn’t. somehow in this “wtf is a narcissus” society we live in, we have gone from “we are the tools of our tools” to “surely our tools could never exceed us”… the ancient greek philosopher homer of simpson once mused “could god microwave a burrito so hot that even he could not eat it”… and we collectively seem all too comfortable to conclude that the map Thomas Aquinas made for us all those scores of years ago is, in fact, the territoire…
'you put more bits into the prompt, you get more bits out.'
I think your line there highlights the difference in what I mean by 'insight'. If I provided in a context window every manufacturing technique that exists, all of the base experimental results on all chemical reactions, every known emergent property, etc., I do not agree that it would then be able to produce novel insights.
This is not an ego issue where I do not want it to be able to do insightful thinking because I am a 'profound power'. You can put in all the context that preceded one of your own insights, and it will not be able to generate that insight. I would very much like it to be able to do that. It would be very helpful.
Do you see how '“superintelligence” is nothing more than a(n extremely) clever arrangement of millions of gpt-3 prompts working together in harmony' is circular? extremely clever == superintelligence
That's weirdly ad hominem, clearly I meant LLMs. They gave it a basic algebra problem, and it could only do it if it had a similar problem broken down step-by-step. What's with the attitude? Edit: I don't even know why I replied to your vitriolic nonsense, I even used LLM in the sentence preceding what you quoted...
“it’s not really a big problem”… surely you can’t be serious… this comment betrays such a profound ignorance that it could only have come from a genius or a... well, let’s not resort to name-calling…
but, seriously: play the tape forward literally one frame and outline what this dataset would even remotely resemble… a core sample from a living human brain? “yeah, just train it on thinking about everything at once”. strong ai isn’t like the restaurant business: the path to success doesn’t involve starting with more than you finish with.