Is there a chance we'll get a model without the "aligment" (lobotomization)? There are many examples where answers from Gemini are garbage because of the ideological fine tuning.
We release our non-aligned models (marked as pretrained or PT models across platforms) alongside our fine-tuned checkpoints; for example, here is our pretrained 7B checkpoint for download: https://www.kaggle.com/models/google/gemma/frameworks/keras/...
Alignment is all but a non issue with open weight base model releases, as they can be finetuned to "de align" them if prompt engineering is not enough.
They have released finetuning code too. You can finetune it to remove the alignment finetuning. I believe it would take just a few hours at max and a couple of dollars.
* List of topics that are "controversial" (models tend to evade these)
* List of arguments that are "controversial" (models wont allow you to think differently. For example, models would never say arguments that "encourage" animal cruelty)
* On average, how willing is the model to take a neutral position on a "controversial" topic (sometimes models say something along the lines of "this is on debate", but still lean heavily towards the less controversial position instead of having no position at all. For example, if you ask it what "lolicon" is, it will tell you what it is and tell you that japanese society is moving towards banning it)
I think that's the wrong level to attack the problem; you can do that also with actual humans, but it won't tell you what the human is unable to think, but rather what they just didn't think of given their stimulus — and this difference is easily demonstrated, e.g. with Duncker's candle problem: https://en.wikipedia.org/wiki/Candle_problem
I agree that it’s not a complete solution, but this sort of characterization is still useful towards the goal of identifying regions of fitness within the model.
Maybe you can’t explore the entire forest, but maybe you can clear the area around your campsite sufficiently. Even if there are still bugs in the ground.