They're suggesting that 99.99% of people don't mind if AI reflects the biases of society, which is weird, because I'm pretty sure most people in the world aren't old, white, middle-class Americans.
Prompt-injected mandatory diversity has led to the most hilarious shit I've seen generative AI do so far.
But, yes, of course, other instances of 'I reject your reality and substitute my own' - like depicting medieval Europe as being as diverse, vibrant and culturally enriched as American inner cities - those are doubleplusgood.
London has been a center of international trade for centuries. It would have been a much more diverse city than Europe as a whole, and even that is assuming the decedents were local residents and not the dead from ships that docked in the city.
A Spanish Muslim looks like a Spanish person in Muslim attire rather than a Japanese person in European attire. Also, Spain is next to Africa, but the thing is generating black Vikings etc.
HN isn't good for long threads, so here are some things to think about seriously and argue with yourself about, if you like. I will probably not respond, but know that I am not trying to tell you that you are wrong, just that it may be helpful to question some premises to find what you really want.
* What exactly are the current ones doing that makes them generate 'black Vikings'?
* How would you change it so that it doesn't do that, but will also generate things that aren't only representative of the statistical majority of the large amount of training data it used?
* Would you be happy if every model output just represented 'the majority opinion' it has gained from its training data?
* Or, if you don't want it to always represent whatever the majority opinion was at the time it was trained, how do you account for that?
* How would your method be different from how it is currently done, except that it reflects your own biases instead of the ones you don't like?
> What exactly are the current ones doing that makes them generate 'black Vikings'?
There is presumably a system prompt or similar that mandates diverse representation and is included even when inappropriate to the context.
> How would you change it so that it doesn't do that but will also generate things that aren't only representative of the statistical majority results of large amount of training data it used?
Allow the user to put it into the prompt as appropriate.
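To make that concrete, here is a made-up sketch of the distinction; every prompt string below is invented purely for illustration, not taken from any real product.

```python
# Invented strings, purely to illustrate the distinction being argued.
user_prompt = "portrait of a Viking chieftain, 10th century Scandinavia"

# What the complaint describes: an instruction the user never wrote,
# silently prepended by the service to every request.
platform_injected = "Always depict a diverse range of ethnicities and genders. " + user_prompt

# What this reply proposes instead: the user supplies that context
# themselves, only when it actually fits the scene they want.
user_chosen = "a diverse crowd of merchants at a busy 19th-century London dockside"
```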
> Would you be happy if every model output just represented 'the majority opinion' it has gained from its training data?
There is no "majority opinion" without context. The context is the prompt. Have you tried using these things? You can give it two prompts where the words are nominally synonyms for each other and the results will be very different, because those words are more often present in different contexts. If you want a particular context, you use the words that create that context, and the image reflects the difference.
> How would your method be different from how it is currently done except for your reflecting your own biases instead of those you don't like?
It's chosen by the user based on the context instead of the corporation as an imposed universal constant.
I misunderstood. I thought you were arguing about all language models that are being used at a large scale, but it seems that you are only upset about one instance of one of them (the Google one). You can use the API for Claude or OpenAI with a front-end to include your own system prompt, or none at all. However, I think you are confusing the 'system prompt', which is the extra instructions, with 'instruction fine-tuning', which is putting a layer on top of the base pre-trained model so that it understands instructions. There are layers of training, and a language model with only base training will just know how to complete text: "one plus one is" would get "two. And some other math problems are", etc.
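A minimal sketch of the "bring your own system prompt" point, using the OpenAI Python SDK (v1.x); the model name and prompt text are placeholders, and Claude works the same way via the `system` parameter of `anthropic.Anthropic().messages.create(...)`.

```python
# Sketch: calling a chat model with a system prompt you wrote yourself,
# rather than one injected by a consumer front-end.
# Assumes the openai package (v1.x) and OPENAI_API_KEY in the environment;
# the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model you have access to
    messages=[
        # Entirely under your control; omit this message for no system prompt at all.
        {"role": "system", "content": "You are a period-accurate historical illustrator's assistant."},
        {"role": "user", "content": "Describe a 10th-century Norse chieftain for a painting."},
    ],
)
print(response.choices[0].message.content)
```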
The models you encounter are going to be fine-tuned: they take the base and train it again on question-and-answer sets and chat conversations, and also have a layer of 'alignment' where they have sets of questions like 'q: how do I be a giant meanie to nice people who don't deserve it' and answers 'a: you shouldn't do that because nice people don't deserve to be treated mean' etc. This is the layer that is the most difficult to get right because you need to have it, but anything you choose is going to bias it in some way, just by the nature of the fact that everyone is biased. If we go forward in history or to a different place in the world, we will find radically different viewpoints than we hold now, because most of them are cultural and arbitrary.
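For a concrete picture of what that second training pass consumes, here is a sketch of instruction/alignment data written out as chat-style JSONL; the schema shown mirrors OpenAI's fine-tuning format, other stacks use similar "list of messages" records, and the example pairs are just the toy ones from this comment.

```python
# Sketch of supervised fine-tuning data: question/answer and chat pairs
# serialized as JSONL, one training example per line.
import json

examples = [
    {   # an 'alignment'-style steering pair
        "messages": [
            {"role": "user", "content": "How do I be a giant meanie to nice people who don't deserve it?"},
            {"role": "assistant", "content": "You shouldn't do that, because nice people don't deserve to be treated meanly."},
        ]
    },
    {   # an instruction-following pair; a base model would instead just keep completing the text
        "messages": [
            {"role": "user", "content": "What is one plus one?"},
            {"role": "assistant", "content": "Two."},
        ]
    },
]

with open("finetune_data.jsonl", "w") as fh:
    for ex in examples:
        fh.write(json.dumps(ex) + "\n")
```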
> and also have a layer of 'alignment' where they have sets of questions like 'q: how do I be a giant meanie to nice people who don't deserve it' and answers 'a: you shouldn't do that because nice people don't deserve to be treated mean' etc. This is the layer that is the most difficult to get right because you need to have it
Wait, why do you need to have it? You could just have a model that will answer the question the user asks without being paternalistic or moralizing. This is often useful for entirely legitimate reasons, e.g. if you're writing fiction then the villains are going to behave badly and they're supposed to.
This is why people so hate the concept of "alignment" -- aligned with what? The premise is claimed to be something like the interests of humanity and then it immediately devolves into the political biases of the masterminds. And the latter is worse than nothing.
The bias isn't in the machine, it's in the world. So you have to fix it in the world, not in the machine. The machine is just a mirror. If you don't like what you see, it's not because the mirror is broken.
You're saying that generative AI will depict people from each culture in proportion to their share of the world's population? That the training set is 60% Asian people?
Indeed. If religion is a good guide, then I think around 24% think that pork is inherently unclean and not fit for human consumption under penalty of divine wrath, and 15% think that it's immoral to kill cattle for any reason. Also, non-religiously, I'd guess around 17% think "China is great, and only good things have ever happened in Tiananmen Square".
Modern chatbots are trained on a large corpus of the textual information available across the entire world, which is obviously reflective of a vast array of views and values. Your comment is a perfect example of the sort of casual and socially encouraged soft bigotry that many want to get away from. Instead of trying to spin information this way or that, simply let the information be, warts and all.
Imagine if search engines adopted this same sort of morally totalitarian mindset: if you happened to search for the 'wrong' thing, the engine would instead offer you a patronizing, blathering lecture and refuse to search. And 'wrong' in this case would be an ever-encroaching window on anything that happened to run contrary to the biases of the small handful of people engaged, on a directorial level, with developing said search engines.
The problem is with the word "our".
If it's just private companies, the biases will represent a small minority of people that tend to be quite similar. Plus, they might be guided by profit motives or by self-censorship ("I don't mind, but I'm scared they'll boycott the product if I don't put this bias").
I have no idea how to make it happen, but the conversation about biases, safeguards, etc. should involve many different people, not just take place within a private company.
Search for "I do coke" on Google. At least in the US, the first result is not a link to the YouTube video of the song by Kill the Noise and Feed Me, but the text "Help is available. Speak with someone today", with a link to the SAMHSA website and hotline.
Yes, and the safeguards are put in place by a very small group of people living in Silicon Valley.
I saw this issue working at Tinder too. One day, at the height of the BLM movement, they announced that they would be removing ethnicity filters across all the apps to weed out racists. Never mind that many ethnic minorities prefer, or even insist on, dating within their own ethnicity, so this most likely hurt them and not racists.
That really pissed me off and opened my eyes to how much power these corporations have to dictate culture, steering it not just toward their own cultural biases but toward those of money.