Yes, you can configure the TTS voices (and languages).
If you send `/voices`, you can pick from multiple voices.
Right now it only shows English voices, but you can send the secret command `/setvoice <voiceName>` with any Amazon Polly[0] neural voice name and it will work as well.
The only downside right now is that I don't auto-detect languages, so if you set the voice to Dutch but ask a question in English, you'll get a response in English with a very Dutch accent haha.
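For anyone curious, `/setvoice` is more or less a pass-through to Polly. A minimal sketch of that call, assuming boto3 (the default voice here is just an illustrative English neural voice, not the bot's actual default):

```python
import boto3

polly = boto3.client("polly")

def synthesize(text: str, voice: str = "Joanna") -> bytes:
    """Turn the bot's reply into speech with an Amazon Polly neural voice.

    `voice` is whatever name was set via /setvoice.
    """
    response = polly.synthesize_speech(
        Text=text,
        VoiceId=voice,
        Engine="neural",
        OutputFormat="ogg_vorbis",  # Telegram voice notes play OGG nicely
    )
    return response["AudioStream"].read()
```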
P.S.
And yes, @aero-glide2 is correct that you can toggle between Telegram's audio and video inputs by tapping the camera/microphone icon; right now MarcBot only supports audio input.
It's indeed possible, and you can easily find existing OSS ChatGPT implementations that can serve as a base. I'm targeting non-technical people, or those who don't want to bother building or hosting it themselves.
Just genuinely curious: is the reason you won't buy it that it looks like it was created in a weekend, so maybe it's not reliable?
I'm in a similar mindset, but most indie makers' products are created this way, and some end up scaling pretty well and turning into good investments, since they're usually cost-effective.
It's not about reliability; it's just that, for something this simple, I'm more inclined to write the two Docker Compose lines to deploy it to my Harbormaster server than to pay for it.
I'm very much in the minority, as I like to self-host, but it seems to me that an OSS solution would do the same thing, and just as reliably.
Then again, the value proposition in this is that it's hosted and you don't have to deal with OpenAI keys, so that's what users are paying for.
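"Two Docker Compose lines" is only a slight exaggeration; a sketch of what that deployment might look like, assuming a prebuilt image from whichever OSS bot you pick (the image and variable names below are made up):

```yaml
# Hypothetical compose file; swap in the real image and env vars of the bot you choose.
services:
  chatgpt-telegram-bot:
    image: ghcr.io/example/chatgpt-telegram-bot:latest
    environment:
      - TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: unless-stopped
```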
It was a temporary limit, as a precaution against hitting the OpenAI API limits; I will increase it later this week. I also calculated this limit with the previous davinci model in mind, which I used when I started building this bot. I later moved to gpt-3.5-turbo, which is cheaper, so there is no reason I can't increase the limit.
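For rough context, assuming the approximate list prices at the time (about $0.02 per 1K tokens for text-davinci-003 versus $0.002 per 1K for gpt-3.5-turbo), the back-of-the-envelope math looks like this:

```python
# Rough cost per message, assuming ~1,000 tokens of prompt + completion
# and approximate list prices; exact numbers depend on usage.
TOKENS_PER_MESSAGE = 1_000
DAVINCI_PER_1K = 0.02   # USD, text-davinci-003 (approx.)
TURBO_PER_1K = 0.002    # USD, gpt-3.5-turbo (approx.)

davinci_cost = TOKENS_PER_MESSAGE / 1_000 * DAVINCI_PER_1K
turbo_cost = TOKENS_PER_MESSAGE / 1_000 * TURBO_PER_1K
print(f"davinci: ${davinci_cost:.3f}/msg, turbo: ${turbo_cost:.3f}/msg")
# Roughly 10x cheaper per message, hence room to raise the limit.
```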
I'm building something similar, but it's for personal use only. That way users can self-host their own version and play with it using their own API key.
I just finished something similar on Telegram. I added a way to connect to "live data" via system commands, so the bot can get weather updates or cryptocurrency prices. I want to wire it up to send emails, set reminders, view or update the calendar, and see what else makes sense.
If you force the bot to decide to do something (e.g., "if you feel the user wants to start again, respond with --RESTART--"), I think you could make an AI that feels sentient or can do things on its own.
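A rough sketch of that sentinel-token pattern, assuming the openai Python client; the handler names and the --WEATHER-- token are made up for illustration:

```python
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The model is instructed to emit sentinel tokens for actions.
SYSTEM_PROMPT = (
    "You are a helpful Telegram assistant. "
    "If you feel the user wants to start again, respond with exactly --RESTART--. "
    "If the user asks about the weather in a city, respond with --WEATHER <city>--."
)

def reset_conversation() -> str:
    # Hypothetical: clear whatever per-chat history the bot keeps.
    return "Okay, starting over!"

def fetch_weather(city: str) -> str:
    # Hypothetical: call a real weather API here.
    return f"(weather report for {city} goes here)"

def dispatch(reply: str) -> str:
    """Turn sentinel tokens into actions; pass plain text through."""
    reply = reply.strip()
    if reply == "--RESTART--":
        return reset_conversation()
    match = re.fullmatch(r"--WEATHER (.+)--", reply)
    if match:
        return fetch_weather(match.group(1))
    return reply

def answer(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return dispatch(response.choices[0].message.content)
```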
A curious experiment for now; let's see how it goes.
The privacy policy needs to be crystal clear about what they do and don't do with the chat data: how long it is kept, what it is used for, who has access to it, etc.
Ideally no chat logs are kept at all, and logging is only enabled temporarily for an individual user when debugging issues.
Lol, I literally deployed an implementation from a GitHub repo[0] for free on Fly.io just hours ago. This way I can also check the code and pay only for what I use. It seems like low-hanging fruit to monetize for people who aren't that into tech.
No gist needed, really. I found that deploying with the Dockerfile available in the repo was easy enough.
Basically clone/copy the repo, configure the bot's settings, and then deploy with `fly launch` from within the repo folder using Fly.io's CLI. Just make sure your .env file is not in .dockerignore.
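For anyone following along, the steps were roughly this (the repo URL and env file name below are placeholders, not the exact ones from [0]):

```sh
git clone https://github.com/<someone>/<chatgpt-telegram-bot>.git
cd <chatgpt-telegram-bot>
cp .env.example .env   # add your Telegram bot token and OpenAI API key
fly launch             # creates the app and a fly.toml from the repo's Dockerfile
fly deploy             # builds the image and ships it
```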
Thank you for your question and for bringing up your concerns. My micro-enterprise is registered in France; you can find more about me and all the side projects I've worked on at sandoche.com. Hope that helps, and feel free to use the contact button to get support :)
I made something similar for my own use too and opened it to the public last week. Try it out maybe: https://t.me/spy16_avabot
Privacy Policy: I DO NOT log/store anything from users' chats (except the numeric telegram-id and the language preference). And I DO NOT send any user info to OpenAI either (I would rather shut it down).
I store the last 5 messages in memory (there's no way for anyone to access that, and on restarts it's gone). I know the consequences of this, but it's not that noticeable because ChatGPT itself has some contextual memory. And I'm hosting on a single node right now.
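Roughly how that in-memory context looks, as a simplified sketch (names are illustrative, not the bot's actual code):

```python
from collections import defaultdict, deque

# Per-chat context: only the last 5 messages, kept in process memory.
# Nothing is written to disk, so a restart wipes it.
MAX_CONTEXT = 5
history: dict[int, deque] = defaultdict(lambda: deque(maxlen=MAX_CONTEXT))

def remember(chat_id: int, role: str, content: str) -> None:
    """Append a message; the deque silently drops the oldest once past 5."""
    history[chat_id].append({"role": role, "content": content})

def context_for(chat_id: int) -> list[dict]:
    """Messages to include with the next API call for this chat."""
    return list(history[chat_id])
```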
Made something similar which also uses Whisper to support voice memos (talk to GPT) and TTS (hear GPT’s responses)
Not sure it warrants a separate post, so sharing it here.
https://t.me/marcbot
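In case it's useful, the voice-memo path is roughly this, as a simplified sketch using the openai Python client (the Telegram file download and the TTS on the way back are omitted):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe_voice_memo(path: str) -> str:
    """Send the downloaded Telegram voice note to Whisper."""
    with open(path, "rb") as audio:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio)
    return result.text

def reply_to(text: str) -> str:
    """Get GPT's answer to the transcribed text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": text}],
    )
    return response.choices[0].message.content
```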