> The question is how will AI replace what I have?
I think you answered your own question:
> Sure the AI systems of the future could theoretically do all that, but it’s still the same work/services.
That's it. At the moment, you had to wire together 4 APIs, and, as you said, this wouldn't do the job without using (someone else's) AI. So this works as a business/service only as long as the AI can't handle those 4 APIs by itself (or 3, if the AI is one of them). And it doesn't even have to figure out how to use these APIs on its own; some company (I'd expect the owner of the AI) can create the interfaces.
It's not as if ChatGPT doesn't know how to use APIs itself. I started a quick experiment, which I haven't had time to finish yet, having it create a simple GUI app connected to a specific SaaS API. Nothing fancy.
At first it rejected my request when I told it to create an app that uses that API, saying that it can't create apps. Then I told it to generate code for the main window with the most important details, and it did. Then I told it to create an API wrapper for the API. Then I told it to connect the API wrapper to the GUI. Honestly, there was nothing it couldn't have done had it not rejected my original request with "I'm sorry, but as an AI language model, I'm not able to generate code for you."
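The three steps I fed it piecemeal (main window, API wrapper, glue between them) amount to something like the following minimal sketch. The SaaS endpoint (`api.example-saas.com`), the `/v1/items` route, and the bearer-token auth are hypothetical placeholders, not the actual service I used:

```python
"""Sketch of the experiment: a main window, an API wrapper, and the glue.
The base URL, route, and auth scheme below are hypothetical placeholders."""
import json
import urllib.request


class SaasClient:
    """Step 2: a thin wrapper around the (hypothetical) SaaS REST API."""

    def __init__(self, base_url="https://api.example-saas.com", token=""):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _url(self, path):
        # Normalize slashes so callers can pass "v1/items" or "/v1/items".
        return f"{self.base_url}/{path.lstrip('/')}"

    def list_items(self):
        req = urllib.request.Request(
            self._url("/v1/items"),
            headers={"Authorization": f"Bearer {self.token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())


def build_window(client):
    """Steps 1 and 3: the main window, wired to the API wrapper."""
    # Deferred import so the wrapper stays usable in headless contexts.
    import tkinter as tk

    root = tk.Tk()
    root.title("SaaS viewer")
    listbox = tk.Listbox(root, width=60)
    listbox.pack(padx=8, pady=8)

    def refresh():
        listbox.delete(0, tk.END)
        for item in client.list_items():
            listbox.insert(tk.END, item.get("name", "<unnamed>"))

    tk.Button(root, text="Refresh", command=refresh).pack(pady=(0, 8))
    return root


if __name__ == "__main__":
    build_window(SaasClient(token="...")).mainloop()
```

Each piece is trivial on its own, which is the point: the model only balked at the request framed as "create an app," not at any individual step.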
So I don't see how the current state of affairs is all that far from me being able to tell a similar AI agent to send a letter to a congressperson about such-and-such a cause, where my opinion is X.
I think that’s all fine and good, the app took me very little time to build. BUT I had to test it.
Integrations are more than code (you have to set up physical addresses in this case, transfer money, etc.). I do believe an AI will get there, but all that human evaluation will still need to happen.
I personally assume these AI systems will be used as the middleware for a long time. It'll be hard for them to enter other spaces because (1) the AI company won't want to be legally liable for the result and (2) it often requires niche, semi-documented knowledge.
Much easier for AI companies to let people take the risks. They can then blame that party.