Are the suggestions liable to be in any random language?
Also, it looks like your recommended workflow comments on all issues. But not all issues are fixable problems. How does this bot respond to non-bugs – things like suggestions, questions, and the like? It'd be cool to have more screenshots depicting this.
ChatGPT is also quite easy to trick into spouting abusive responses. Have you red-teamed getting Mendable to output mean stuff?
Hi, Nick here from SideGuide. Thanks for your questions and suggestions.
1. As of right now, in roughly 90% of cases the bot infers the right language from the issue context, though we are working hard to improve this.
2. We use the issue's labels to identify the issue type and pick an appropriate prompt for each type, so non-bugs like questions and suggestions get a different style of response than bug reports (there's a rough sketch of the idea after this list). Thanks for the suggestion, we'll add more screenshots showing that ASAP.
3. We're currently only using GPT-3 (davinci-003) in production. We're testing ChatGPT as well, but we don't think it's ready yet due to its downtime/rate limiting and lack of fine-tuning capabilities. Because this is still an early MVP, we haven't gotten around to filtering abusive responses just yet.
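For the curious, here's a rough sketch of the general idea behind points 2 and 3. This is simplified and not our actual code; the label names, prompt wording, and the classifyIssue helper are illustrative only.

```ts
// Rough sketch, not production code. Label names, prompts, and the
// classifyIssue helper are illustrative assumptions, not Mendable's real setup.

type IssueType = "bug" | "feature" | "question";

// Map GitHub issue labels to a coarse issue type.
function classifyIssue(labels: string[]): IssueType {
  if (labels.includes("bug")) return "bug";
  if (labels.includes("enhancement")) return "feature";
  return "question";
}

// A different prompt template per issue type, so non-bugs get an
// appropriate response instead of a "fix".
const PROMPTS: Record<IssueType, string> = {
  bug: "Suggest a possible fix for the following GitHub issue:\n\n",
  feature: "Summarize this feature request and outline how it might be approached:\n\n",
  question: "Answer the question in this GitHub issue as helpfully as possible:\n\n",
};

// Call the GPT-3 completions endpoint (text-davinci-003) with the chosen prompt.
async function suggestReply(labels: string[], issueBody: string): Promise<string> {
  const prompt = PROMPTS[classifyIssue(labels)] + issueBody;
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt,
      max_tokens: 512,
      temperature: 0.3,
    }),
  });
  const data = await res.json();
  return data.choices[0].text.trim();
}
```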