I think you're mixing up what topics are. The actual topics generated by LDA are distributions over all words in the corpus; I concatenate each distribution's top 8 words to produce a meaningful descriptor for it. So server-client-http-request-service-ruby-connection-user is one topic / word distribution, in which "ruby" happens to be the 6th most probable word, likely because it appears often in posts about servers, web services, etc. It does not mean that the word "ruby" itself is classified as server-related. The same applies to the other examples you gave.
The categories/domains I assigned manually, simply to show one possible way of interpreting the word distributions that LDA generated.
Not sure what you mean by a new classification approach. There is no classification here, since there are no labeled documents; this is purely unsupervised topic modelling. The topics themselves are mathematical objects. How they are later named or grouped for better human readability is a subjective matter.