The one distinction that I think is worth making is that OpenAPI describes HTTP APIs using the semantics of HTTP. CORBA and SOAP attempted to be protocol-independent. That's much harder to do well.
Another good aspect of OpenAPI is that most people who use it are moving toward treating the definition as the primary design artifact and letting tooling work from that. Almost no one designed WSDL or IDL by hand. Focusing on an OpenAPI definition as a design contract helps developers produce solutions that are more independent of tooling. And that's a good thing.
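To make that concrete, here's what a minimal design-first contract might look like (a hypothetical service and path, trimmed to the essentials):

    openapi: "3.0.0"
    info:
      title: Users API          # hypothetical example service
      version: "1.0"
    paths:
      /users/{id}:
        delete:
          summary: Delete a user
          parameters:
            - name: id
              in: path
              required: true
              schema:
                type: integer
          responses:
            "204":
              description: User deleted

You write this first, review it as the contract, and only then point codegen, docs, and mock-server tooling at it.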
There's nothing wrong with just "tunneling" SOAP or anything else over HTTP POST. Tons of APIs do this and work just fine.
Now maybe it's easier on clients when you can just curl -XDELETE something, but I'm not sure it's that big of a difference in the end. Especially if you have auto-generated client code.
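For illustration, here are the two styles side by side using Python's requests library (hypothetical URLs and envelope; the point is where the "delete" semantics live):

    import requests

    # Tunneled style: everything is a POST; the action hides in the body.
    # (Hypothetical endpoint and envelope, for illustration only.)
    requests.post(
        "https://api.example.com/endpoint",
        data="<Envelope><DeleteUser id='123'/></Envelope>",
        headers={"Content-Type": "text/xml"},
    )

    # HTTP-native style: the verb and the URL carry the semantics.
    requests.delete("https://api.example.com/users/123")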
The problem is that you abandon a bunch of existing architecture and middleware: an extremely rich caching API, content negotiation, proxy info, authentication.
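As a sketch of what you get for free (hypothetical URL and token), a single GET can lean on all of that machinery:

    import requests

    # Every header here is standard HTTP, so proxies, gateways, and caches
    # in between understand it without knowing anything about this API.
    resp = requests.get(
        "https://api.example.com/users/123",
        headers={
            "Accept": "application/json",        # content negotiation
            "Authorization": "Bearer <token>",   # standard auth scheme
        },
    )
    print(resp.headers.get("Cache-Control"))     # caching policy, visible to intermediaries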
I can trivially set up an HTTP load balancer and log status codes. Can't say the same for SOAP.
It makes it hard to take advantage of HTTP caches. There is also some redundancy because SOAP has headers and HTTP has headers. Which to use?
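To make the caching point concrete, here's a conditional GET as a minimal sketch (hypothetical URL). Any intermediary cache can do this revalidation for a GET, but not for a SOAP action tunneled through POST:

    import requests

    # First fetch: the server tags the representation with an ETag.
    first = requests.get("https://api.example.com/users/123")
    etag = first.headers.get("ETag")

    # Revalidation: any cache along the way can do this on our behalf.
    second = requests.get("https://api.example.com/users/123",
                          headers={"If-None-Match": etag})
    if second.status_code == 304:    # Not Modified: the cached body is still good
        body = first.content
    else:
        body = second.content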
It also is handy to be able to make idempotent requests, especially over flaky networks.
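A minimal sketch of what that buys you (hypothetical URL): since DELETE is defined as idempotent, the client can retry blindly after a timeout.

    import requests

    # DELETE is idempotent: deleting an already-deleted resource changes
    # nothing, so a lost response is handled by simply trying again.
    for attempt in range(3):
        try:
            resp = requests.delete("https://api.example.com/users/123", timeout=2)
            if resp.status_code in (204, 404):   # success, or already gone
                break
        except requests.exceptions.RequestException:
            continue   # timeout / connection reset: safe to retry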
That might be true, and happens in lots of areas, but Swagger is nothing like CORBA/SOAP specs and JSON is nothing like SOAP.
JSON gives you most of the stuff SOAP was actually used for (except for bureaucratic spec-driven edge cases) at 20% of the complexity -- so the JSON generation did something right.
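Roughly the same lookup in both styles (hypothetical payloads, trimmed):

    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetUser xmlns="http://example.com/users">
          <id>123</id>
        </GetUser>
      </soap:Body>
    </soap:Envelope>

vs.

    GET /users/123
    -> { "id": 123, "name": "Alice" }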
(I'm old enough to have been through CORBA and SOAP).
Your post appears to characterize any such IDL as a "mistake". What makes attempts to formalize an API in this manner inherently problematic? Or did you just mean that it's a mistake to do a different IDL for REST-specific APIs instead of using an extant one? Does your complaint extend to non-web IDLs like Thrift or protobufs?
In my experience, there is nothing inherently wrong with this type of IDL. Like all architectural decisions, it comes with its own tradeoffs, but there's no reason the tradeoff profile is inherently wrong.
CORBA may have failed, but not so much because of its design. Today we use HTTP as an RPC protocol, and RPC is what CORBA was. And today we have things like gRPC (Google's protocol, based on HTTP/2 and Protocol Buffers) and Thrift (which came out of Facebook). Microsoft's COM, of course, is very similar to CORBA and was hugely successful.
CORBA is of course more complex, in ways that are less useful today. One particular feature was that the server always returned "live" objects that transparently proxied calls back to the server. So you could do something like getUser(123).delete(), and it would cause the User object's delete method to be invoked remotely. It turns out this creates rather tight coupling between client and server; in particular, both sides have to use reference counting to keep objects alive as long as any client is using them. Things tend to get out of hand that way. While it is certainly magical to use a remote server exactly like a local one (location transparency), it's also a performance trap.
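A rough Python sketch of that pattern -- illustrative only, these are not real CORBA APIs -- showing why each call is a round trip and why both sides end up doing distributed reference counting:

    # Illustrative sketch only; FakeConnection stands in for the wire protocol.
    class FakeConnection:
        def add_ref(self, oid):
            print("server: pin object", oid)       # distributed refcount up
        def release(self, oid):
            print("server: release object", oid)   # distributed refcount down
        def invoke(self, oid, method, args):
            print("network round trip:", oid, method, args)

    class RemoteProxy:
        """A 'live' object: every method call goes back to the server."""
        def __init__(self, conn, object_id):
            self._conn = conn
            self._object_id = object_id
            conn.add_ref(object_id)    # server must keep the object alive for us

        def __getattr__(self, method):
            def call(*args):
                return self._conn.invoke(self._object_id, method, args)
            return call

        def __del__(self):
            self._conn.release(self._object_id)

    user = RemoteProxy(FakeConnection(), 123)   # e.g. what getUser(123) returns
    user.delete()    # looks local, but it's a network round trip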
But of course Swagger/OpenAPI has nothing to do with this.
I think what you're talking about is more the forward/backward compatibility features.
Much of that was independent of CORBA; you just needed to release different versions of the API. This is identical in SOAP and REST today, and many client libraries are generated from specs.
http://www.omg.org/spec/CORBA/
https://www.w3.org/TR/2007/REC-soap12-part0-20070427/