To my understanding, wikis can take all their data, host it themselves, and point the domain to their new hosting, and the move would be entirely invisible to end users - provided it's done properly and the quality of the new hosting infrastructure isn't considerably worse.
Observant users might notice the removal of any Weird Gloop branding, but otherwise the only way people would know is if the wiki itself announces the move or the wiki's performance becomes noticeably worse.
And Weird Gloop won't do what Fandom does and keep a zombie copy of your wiki online. So you won't be competing with Weird Gloop wiki traffic to reclaim your traffic. In fact, the obligations they agree to forbid it.
Upon termination by either party, Weird Gloop is obligated to:
- Cease operating any version of the Minecraft Wiki
- Transfer ownership of the minecraft.wiki domain to the community members
- Provide dumps of Minecraft Wiki databases and image repositories, and any of Weird Gloop's MediaWiki configuration that is specific to Minecraft Wiki
- Assist in transferring to the community members any domain-adjacent assets or accounts that cannot reasonably be acquired without Weird Gloop's cooperation
- This does not include any of Weird Gloop's core MediaWiki code, Cloudflare configuration, or accounts/relationships related to advertising or sponsorships
This sort of agreement means Weird Gloop is incentivized to not become so shit that wikis would want to leave (and take their ad revenue with them), because they've tried to make leaving Weird Gloop as easy as possible.
This is very reassuring. Usually, I assume agreements between different groups will inordinately benefit one party, but this particular agreement sounds like it creates a more level playing field.
And besides, it's not like non-profits are exempt from restructuring and becoming worse. There is no silver bullet.
Crazy seeing a Weird Gloop post in the morning on HN.
Cook is very passionate about wikis - as is the rest of the team - and the RS wiki has long been regarded as one of the best gaming wikis on the internet; no contest. If you run a wiki - talk to Weird Gloop. The blog isn't bullshit and they genuinely want to help.
I think it's awesome that they're helping more wikis move away from Fandom after the success of the Minecraft wiki moving.
They're also running a wiki for Andrew Gower's upcoming game.
I really hope I hear about other wikis making the move in the near future. Fandom deserves to die out.
The RS Wiki is the single website I've whitelisted in my ad blocker. And despite needing ads to cover costs, they made sure to ask the community first about adding them and what alternative funding options might be possible. Ads were really a last resort, and they are obsessive about making sure the ads are non-intrusive: a single banner, not in primary real estate, and not harming the wiki experience. If any ads cause problems they completely pause running ads until the ad host resolves the issue. Although I'm usually signed in, so I never see ads anyway - they only show for users who aren't signed in.
There's also a channel on the rs wiki's discord for reporting bad ads, which Cook responds to very quickly (single digit minutes from the interactions I've seen).
My litmus test is to simply lie. It weeds out the people hating AI simply because they know or think it is AI. If you link directly to an AI site they're already going to say they hate it or that it all "looks like AI slop". You won't get anywhere trying to meet them at a middle ground because they simply aren't interested in any kind of a middle ground.
Which is exactly the opposite of what the artists claim to want. But god is it hilarious following the anti-AI artists on Twitter who end up having to apologize for liking an AI-generated artwork pretty much as a daily occurrence. I just grab my popcorn and enjoy the show.
Every passing day the technologies making all of this possible get a little bit better, and every single day they are the worst they will ever be. They'll point to today's imperfections or flaws as evidence of something being AI-generated, and those imperfections will be trained out with fine-tuning or LoRA models until there is no longer any way to tell.
E: A lot of them also don't realize that besides text-to-image there is image-to-image for more control over composition as well as ControlNet for controlling poses. More LoRA models than you can imagine for controlling the style. Their imagination is limited to strictly text-to-image prompts with no human input afterwards.
AI is a tool not much different than Photoshop was back when "digital artists aren't real artists" was the argument. And in case anyone has forgotten: "You can't Ctrl+Z real art".
Ask any fractal artists the names they were called for "adjusting a few settings" in Apophysis.
E2: We need more tests such as this. The vast majority of people can't identify AI nearly as well as they think they can - even people familiar with AI who "know what to look for".
> Respondents who felt confident about their answers had worse results than those who weren’t so sure
> Survey respondents who believed they answered most questions correctly had worse results than those with doubts. Over 78% of respondents who thought their score is very likely to be high got less than half of the answers right. In comparison, those who were most pessimistic did significantly better, with the majority of them scoring above the average.
Depending on how high up it can reliably work from: collaborate with UK CCTV surveillance so that you can better track individuals with fewer cameras, as long as you can collate the drone footage with cameras that confirm their position at various points in time.
Fly a handful of drones over the area of a fleeing suspect and be able to track their whereabouts and look for suspicious behaviors (eg. someone running and making constant turns in a city or doubling back often, cutting through alleys).
Hell, fly a few drones over the city to monitor foot traffic of the population and determine possible points of interest for new developments. Where are people walking to? How do they tend to get there? Can we optimize traffic for them - or more realistically - around them?
Could be used for other forms of crowd analysis too such as how to best disperse a riot and separate a crowd.
Sorry I guess I'm about as pessimistic as you are about it. Use in S&R like throwup238 suggested seems like a good non-militaristic fit for it.
I understand the vast majority of CCTV is private-sector. That doesn't matter when it is handed over to the government with little to no push back when the government asks for the footage.
If you'd only need CCTV to confirm with facial recognition that your suspect was in the area, you wouldn't need CCTV coordination - that would be the entire point of deploying a WALDO network. It lowers the number of CCTV cameras you'd need to coordinate with to track someone's movements.
It was disabled yesterday due to the high traffic - but I was able to connect today, and after I said hello the chat kicked me off as soon as I asked a question. So unfortunately I've not been able to test it out for more than a few seconds beyond the "Hello, how can I help you today?"
One thing I've noticed for a lot of these AI video agents, and I've noticed it in Meta's teaser for their virtual agents as well as some other companies, is they seem to love to move their head constantly. It makes them all a bit uncanny and feel like a video game NPC that reacts with a head movement on every utterance. It's less apparent on short 5-10s video clips but the longer the clips the more the constant head movements give it away.
I'm assuming this is, of course, a well known and tough problem to solve and is being worked on. Since swinging too far in the other direction of stiff/little head movements would make it even more uncanny. I'd love to hear what has been done to try and tackle the problem or if at this point it is an accepted "tell" so that one knows when they're speaking with a virtual agent?
Please try it again when you get the chance! We were dealing with high load the past few days and are good to go again! Re: head movements, I totally agree; natural head movements are really important to make it feel more natural. The issue is controllability today, which is something we're working on as well!
Yes. Your type can encode what the proper format for a string should be, and if a string is passed that does not meet that format it will fail the type check, allowing you to make any necessary adjustments to handle the new year_quarter date format.
A super naïve check that enforces "/" instead of "-" as the separator character for a date formatted as a string: if a date is provided with some other separator character, it will fail the type check. If my function takes a DateString, the string must be formatted correctly to pass. Obviously this isn't enough (YYYY/MM/DD is different than DD/MM/YYYY), but the intention was to show a way to enforce something via types: rather than validating a string to check that you have a DateString, you can simply enforce that you have one.
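A minimal sketch of what I mean, assuming TypeScript template literal types (DateString and daysUntil are just illustrative names, not anything from a library):

```typescript
// DateString only matches number/number/number, so a "-" separator
// (or any arbitrary string) is rejected at compile time.
type DateString = `${number}/${number}/${number}`;

// Hypothetical consumer; the point is the parameter type.
function daysUntil(date: DateString): void {
  console.log(`Computing days until ${date}`);
}

daysUntil("2024/01/15");    // OK: matches the template literal type
// daysUntil("2024-01-15"); // Compile-time error: wrong separator
// Caveat from above: "15/01/2024" also type-checks, since the type
// can't tell YYYY/MM/DD apart from DD/MM/YYYY.
```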
Not only do I work less in the office - the quality of the work I produce is lower. I'm more stressed out about things that I no longer have the time for because I'm wasting time commuting to work when I could be taking care of chores and errands or myself.
If I ever have to waste 5 more minutes over water cooler chat about what someone did the last weekend I'm putting in my 2 weeks notice. I don't go to work to socialize and as far as I can tell that is the real reason people want to RTO. They quite literally don't know how to socialize outside of work and bugging their coworkers so want everyone to RTO so they have people to chat with who have no choice but to pretend to be listening and be courteous with them.
Ever seen someone try and search something on Google and they are just AWFUL at it? They can never find what they're looking for, and then you try and can pull it up in a single search? That's what it is like watching some people try to use LLMs. Learning how to prompt an LLM is as much a learned skill as learning how to phrase internet searches is. And as much as people decried that "searching Google isn't a real skill", tech-savvy people knew better.
Same thing except now it's also many tech-savvy people joining in with the tech-unsavvy in saying that prompting isn't a real skill...but people who know better know that it is.
On average, people are awfully bad at describing exactly what it is they want. Ever speak with a client? And you have to go back and forth for a few hours to finally figure out what it is they wanted? In that scenario you're the LLM. Except the LLM won't keep asking probing questions and clarifications - it will simply give them what they originally asked for (which isn't what they want). Then they think the LLM is stupid and stop trying to ask it for things.
Utilizing an LLM to its full potential is a lot of iterative work and, at least for the time being, requires having some understanding of how it works under the hood (eg. would you get better results by starting a new session or by asking it to forget previous, poorly worded instructions?).
I'm not arguing that you can't get results with LLMs, I'm just asking whether it's worth the actual effort, especially when there's a better way to get the result you're seeking (or if the result is really something that you want).
An LLM is a word (token?) generator which can be amazingly consistent according to its model. But rarely is my end goal to generate text. It's either to do something, to understand something, or to communicate. For the first, there are guides (books, manuals, ...), for the second, there are explanations (again books, manuals,...), and the third is just using language to communicate what's on my mind.
That's the same thing with search engines. I use them to look for something. What I need first is a description of that something, not how to do the "looking for". Then once you know what you want to find, it's easier to use the tool to find it.
If your end goal can be achieved with LLMs, be my guest to use them. But I'm wary of people taking them at face value and then pushing the workload onto everyone else (like developers using Electron).
It's hard to quantify how much time learning how to search saves because the difference can range between infinite (finding the result vs not finding it at all) to basically no difference (1st result vs 2nd result). I think many people agree it is worth learning how to "properly search" though. You spend much less time searching and you get the results you're looking for much more often. This applies outside of just Google search: learning how to find and lookup information is a useful skill in and of itself.
ChatGPT has helped me write some scripts for things that otherwise probably would have taken me at least 30+ minutes, and it wrote them in <10 seconds and they worked flawlessly. I've also had times where I worked with it for 45 minutes and only ever got error-ridden code, where I had to fix the obvious errors and rewrite parts of it to get it working. Sometimes during this process it actually taught me a new approach to doing something. If I had started coding it from scratch by myself it probably would have taken me only ~10 minutes. But if I was better at prompting, what if that 45 minutes was <10 minutes? It would go from a time loss to a time save and be worth using. So improving my ability to prompt is worthwhile as long as doing so trends towards me spending less time prompting.
Which is thankfully pretty easy to track and test. On average, as I get better at prompting, do I need to spend more or less time prompting to get the results I am looking for? The answer to that is largely that I spend less time and get better results. The models constantly changing and improving over time can make this messy - is it the model getting better or is it my prompting? But I don't think models change significantly enough to rule out that I spend less time prompting than I have in the past.
> you do need to break down the problem into smaller chunks so GPT can reason in steps
To search well, you need good intuition for how to select the right search terms.
To LLM well, you can ask the LLM to break the problem into smaller chunks, and then have the LLM solve each chunk, and then have the LLM check its work for errors and inconsistencies.
And then you can have the LLM write you a program to orchestrate all of those steps.
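A hedged sketch of what that could look like (callLLM here is a hypothetical stand-in for whatever chat-completion client you use, not a real API):

```typescript
// Decompose -> solve -> verify, with the LLM doing each stage.
type CallLLM = (prompt: string) => Promise<string>;

async function solveWithDecomposition(callLLM: CallLLM, problem: string): Promise<string> {
  // 1. Ask the model to break the problem into smaller chunks.
  const plan = await callLLM(
    `Break this problem into small, independent steps, one per line:\n${problem}`,
  );
  const steps = plan.split("\n").filter((s) => s.trim().length > 0);

  // 2. Have the model solve each chunk on its own.
  const partials: string[] = [];
  for (const step of steps) {
    partials.push(await callLLM(`Solve just this step:\n${step}`));
  }

  // 3. Have the model check the combined work for errors and inconsistencies.
  return callLLM(
    `Combine these partial solutions into one answer, fixing any errors or inconsistencies:\n${partials.join("\n---\n")}`,
  );
}
```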
Yes you can. What was the name of the agent that was going to replace all developers? Devin or something? It was shown it took more time to iterate over a problem and created terrible solutions.
LLMs are in the evolutionary phase, IMHO. I doubt we're going to see revolutionary improvements from GPTs. So I say time and time again: the technology is here, show it doing all the marvelous things today. (btw, this is not directed at your comment in particular and I digressed a bit, sorry).
If prompting ability varies then this is not some objective question, it depends on each person.
For me I've found more or less every interaction with an LLM to be useful. The only reason I'm not using it continually for 8 hours a day is because my brain is not able to usefully manage that torrent of new information and I need downtime.
English as an input language works in simple scenarios but breaks down very, very quickly. I have to get extremely specific and deliberate. At some point I have to write pseudocode to get the machine to get, say, double-checked locking right. Because I have enough experiences where varying the prompting didn't work, I revert to just writing the code when I see the generator struggling.
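To be concrete about the double-checked locking example, a rough single-threaded TypeScript analogue of the shape I mean looks like the sketch below, where the "lock" is just a shared promise guarding one-time init (my sketch, not generator output):

```typescript
interface Config { endpoint: string }

let cached: Config | null = null;           // the initialized value
let pending: Promise<Config> | null = null; // the "lock": an in-flight init

async function loadConfig(fetchConfig: () => Promise<Config>): Promise<Config> {
  if (cached) return cached;   // first check: fast path, no coordination
  if (!pending) {              // second check: start the expensive init only once
    pending = fetchConfig().then((c) => (cached = c));
  }
  return pending;
}
```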
When I encounter somebody who says they do not write code anymore, I assume that they either:
1. Just don't do anything beyond the simplest tutorial-level stuff
2. or don't consider their post-generation edits as writing code
3. or are just bullshitting
I don't know which it is for each person in question, but I don't trust that their story would work for me. I don't believe they have some secret-sauce prompting that works for scenarios where I've tried to make it work but couldn't. Sure, I may have missed some ways - my map of what works and what doesn't may be very blurry at the border - but the surprises tend to be on the "doesn't work" side. And no, Claude doesn't change this.
I definitely still write code. But I also prefer to break down problems into chunks which are small enough that an LLM could probably do them natively, if only you could convince it to use the real API instead of inventing a new API each time. Concrete example from ChatGPT-3.5: I tried getting it to make and then use a Vector2D class - in one place it had sub(), mul() etc., in the other it had subtract(), multiply() etc.
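A hypothetical reconstruction of that naming drift (mine, not the actual ChatGPT output):

```typescript
// The generated class defines one set of names...
class Vector2D {
  constructor(public x: number, public y: number) {}
  sub(o: Vector2D) { return new Vector2D(this.x - o.x, this.y - o.y); }
  mul(s: number)   { return new Vector2D(this.x * s, this.y * s); }
}

const v = new Vector2D(3, 4).sub(new Vector2D(1, 1)); // fine
// ...while code generated elsewhere assumes a different API and won't compile:
// new Vector2D(3, 4).subtract(new Vector2D(1, 1));
// error: Property 'subtract' does not exist on type 'Vector2D'.
```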
It can write unit tests, but it makes similar mistakes there, so I have to rewrite them… even so, it still makes writing those tests easier.
It writes good first-drafts for documentation, too. I have to change it, delete some stuff that's excess verbiage, but it's better than the default of "nobody has time for documentation".
Exactly! What is this job that you can get where you don't code and just copy-paste from ChatGPT? I want it!
My experience is just as you describe it: I ask a question whose answer is in stackoverflow or fucking geeks4geeks? Then it produces a good answer. Anything more is an exercise in frustration as it tries to sneak nonsense code past me with the same confident spiel with which it produces correct code.
"It's a translator. But they seem to be good/bad/weird/delusional in natural translations. I have a"
(Google translate stopped suddenly, there).
I've tried using ChatGPT to translate two Wikipedia pages from German to English, as it can keep citations and formatting correct when it does so; it was fine for the first 2/3rds, then it made up mostly-plausible statements that were not translated from the original for the rest. (Which I spotted and fixed before saving, because I was expecting some failure).
Don't get me wrong, I find them impressive, but I think the problem here is the Peter Principle: the models are often being promoted beyond their competence. People listen to that promotion and expect them to do far more than they actually can, and are therefore naturally disappointed by the reality.
People like me who remember being thrilled to receive a text adventure cassette tape for the Commodore 64 as a birthday or Christmas gift when we were kids…
…compared to that, even the Davinci model (that really was autocomplete) was borderline miraculous, and ChatGPT-3.5 was basically the TNG-era Star Trek computer.
But anyone who reads me saying that last part without considering my context, will likely imagine I mean more capabilities than I actually mean.
> On average, people are awfully bad at describing exactly what it is they want. Ever speak with a client? And you have to go back and forth for a few hours to finally figure out what it is they wanted?
With one of them, that back-and-forth lasted the entire duration of my time working for them.
They didn't understand why it was taking so long despite constantly changing what they asked for.
Building the software is usually like 10% of the actual job; we could do a better job of teaching that.
The other 90% is mostly mushy human stuff: fleshing out the problem, setting expectations, etc. Helping a group of people reach a solution everyone is happy with has little to do with technology.
Mostly agree. Until ChatGPT, I'd have agreed with all of that.
> Helping a group of people reach a solution everyone is happy with has little to do with technology.
This one specific thing, is actually something that ChatGPT can help with.
It's not as good as the best human, or even a middling human with five years' business experience, but rather it's useful because it's good enough at so many different domains that it can be used to clarify thoughts and explain the boundaries of the possible - Google Translate for business jargon, though like Google Translate it is also still often wrong - the ultimate "jack of all trades, master of none".
We're currently in the shiny toy stage; once the flaws are thoroughly explored and accepted by all as fundamental, I suspect interest will fade rapidly.
There's no substance to be found, no added information; it's just repeating what came before, badly, which is exactly the kind of software that would be better off not written if you ask me.
The plan to rebuild society on top of this crap is right up there with basing our economy on manipulating people into buying shit they don't need and won't last so they have to keep buying more. Because money.
The worry I have is that the net value will become great enough that we’ll simply ignore the flaws, and probabilistic good-enough tools will become the new normal. Consider how many ads the average person wades through to scroll an Insta feed for hours - “we’ve” accepted a degraded experience in order to access some new technology that benefits us in some way.
To paraphrase comedian Mark Normand: “Capitalism!”
To an extent I agree; I think that's true for all tech since the plough, fire, and axles.
But I would otherwise say that most (though not all*) AI researchers seem to be deeply concerned about the set of all potential negative consequences, including mutually incompatible outcomes where we don't know which one we're even heading towards yet.
* And not just Yann LeCun — though, given his position, it would still be pretty bad even if it was just him dismissing the possibility of anything going wrong
> That's what it is like watching some people try to use LLMs.
Exactly. I made a game testing prompting skills a few days ago to share with some close friends, and it was your comment that inspired me to translate the game into English and submit it to HN. ( https://news.ycombinator.com/item?id=41545541 )
I am really curious about how other people write prompts, so while my submission only got 7 points, I'm happy that I can see hundreds of people's own ways to write prompts thanks to HN.
However, after reading most prompts (I may have missed some), I found exactly 0 prompts containing any kind of common prompting technique, such as "think step by step", explaining the specific steps to solve the problem instead of only asking for the final result, or few-shot prompting (showing example inputs and outputs). Half of the prompts simply ask the AI to do the thing (at least asking correctly). The other half do not make sense; even if we showed the prompt to a real human, they wouldn't know what to reply with.
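For anyone unfamiliar, here is a hypothetical prompt (the task is made up) combining two of those techniques, few-shot examples plus "think step by step":

```
You are extracting totals from receipts.

Input: "2 coffees at $3 each, 1 bagel at $2"
Output: 8

Input: "3 teas at $2 each"
Output: 6

Think step by step, then answer with only the final number.
Input: "2 sandwiches at $5 each, 2 juices at $3 each"
```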
Well... I expected that SOME complaints about AI online are from people not familiar with prompting or not good at it. But now I realize there are a lot more people than I thought who don't know basic prompting techniques.
Anyway, a fun experience for me! Since it was your comment made me want to do this, I just want to share it with you.
Could you reference any youtube videos, blog posts, etc of people you would personally consider to be _really good_ at prompting? Curious what this looks like.
While I can compare good journalists to extremely great and intuitive journalists, I don't really have any references for this in the prompting realm (except for when the Dall-e Cookbook was circulating around).
Sorry for the late response - but I can't. I don't really follow content creators at a level where I can recall names or even what they are working on. If you browse AI-dominated spaces you'll eventually find people who include AI as part of their workflows and have gotten quite proficient at prompting them to get the results they desire very consistently. Most AI stuff enters into my realm of knowledge via AI Twitter, /r/singularity, /r/stablediffusion, and Github's trending tab. I don't go particularly out of my way to find it otherwise.
/r/stablediffusion used to (less so now) have a lot of workflow posts where people would share how they prompt and adjust the knobs/dials of certain settings and models to make what they make. It's not so different from knowing which knobs/dials to adjust in Apophysis to create interesting fractals and renders. They know what the knobs/dials adjust for their AI tools and so are quite proficient at creating amazing things using them.
People who write "jailbreak" prompts are one kind of example. There is real effort put into preventing people from prompting the models into bypassing their safeguards - and yet there are always people capable of prompting the model into doing exactly that. It can be surprisingly difficult to do yourself for recent models, and the jailbreak prompts themselves are becoming more complex each time.
For art in particular, knowing a wide range of artist names, names of various styles, and how certain mediums will look, as well as mixing & matching with various weights for the tokens, can get you very interesting results. A site like https://generrated.com/ can be good for that, as it gives you a quick baseline of how including certain names will change the style of what you generate. If you're trying to hit a certain aesthetic style it can really help. But even that is a tiny drop in a tiny bucket of what is possible. Sometimes it is less about writing an overly detailed prompt and more about knowing the exact keywords to get the style you're aiming for. Being knowledgeable about art history and famous artists throughout the years will help tremendously over someone with little knowledge. If you can't tell a Picasso from a Monet, you're going to find generating paintings in a specific style much harder than an art buff would.
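As a hypothetical illustration, a style-focused prompt using the (token:weight) emphasis syntax from Stable Diffusion front-ends such as AUTOMATIC1111 might look like:

```
portrait of an old fisherman, oil on canvas,
(impressionism:1.3), (style of Claude Monet:1.2),
muted palette, visible brushstrokes
```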