The impression I got from the article was that the warm keep-alive connections are encrypted: the SSL handshake takes place ahead of time, and the connection then tunnels multiple requests from multiple users, hence the lower latency.
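Roughly what that looks like from the client side, as a minimal sketch in Python (the URLs are just placeholders, not anything from the article): with HTTP keep-alive over TLS, the handshake happens once and later requests reuse the same encrypted connection.

    import requests

    # A Session holds a connection pool with HTTP keep-alive, so the
    # TCP + SSL handshake for a given host is paid once, not per request.
    session = requests.Session()

    # The first request performs the full SSL handshake.
    session.get("https://example.com/first")

    # Later requests reuse the same warm, already-encrypted connection,
    # which is where the latency saving comes from.
    session.get("https://example.com/second")
    session.get("https://example.com/third")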
Amazon's ELB (the EC2 load balancer) used to send HTTPS traffic to your back-end unencrypted, but I believe they have since fixed this.
Not sure what you mean by your ELB/HTTPS comment. ELB can be used as an HTTPS terminator. It will then proxy traffic to your backend as HTTP. It can also be used as a straight TCP proxy, but in that case it's just shoving along the HTTPS request to an HTTPS terminator that you maintain.
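For concreteness, a rough sketch of those two listener modes using boto3 against the classic ELB API (the load balancer names, availability zone, and certificate ARN are placeholders):

    import boto3

    elb = boto3.client("elb")

    # HTTPS termination: ELB decrypts on 443 and proxies plain HTTP to port 80.
    elb.create_load_balancer(
        LoadBalancerName="my-terminating-lb",  # placeholder name
        AvailabilityZones=["us-east-1a"],
        Listeners=[{
            "Protocol": "HTTPS",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
            # Placeholder certificate ARN
            "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/example",
        }],
    )

    # Straight TCP proxy: ELB forwards the encrypted bytes untouched, and the
    # HTTPS terminator you run on the backend handles the SSL handshake.
    elb.create_load_balancer(
        LoadBalancerName="my-passthrough-lb",  # placeholder name
        AvailabilityZones=["us-east-1a"],
        Listeners=[{
            "Protocol": "TCP",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "TCP",
            "InstancePort": 443,
        }],
    )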