As a former Borland employee (not for very long, 2006-2009, and actually working in their Austrian offices, which originally became part of the company through the Segue acquisition), I certainly didn't have the complete picture, but our managers, even up to those located in Cupertino and later Austin, were quite open about what Borland's problem was: even with the rise of Java, they thought they could continue in the IDE business as they always had. Apparently JBuilder used to be their #1 cash cow. Well, until Eclipse came along: within 18 months, JBuilder license sales dropped to essentially zero. They eventually realized that the whole IDE market was dead, so that business was spun off into CodeGear, JBuilder was relaunched as an Eclipse distribution with "premium" extensions, and CodeGear was ultimately sold to Embarcadero, while the rest of the company tried to refocus on producing software for other parts of the development lifecycle, from requirements management to automated testing and test management (hence Segue) to SCM. Company performance was atrocious, and that really screwed over people who took part in the employee share program, but they somehow managed to hang on long enough to be acquired by Micro Focus.
I'm not saying the market is still dead, but for a company like Borland, 10 years ago there was nothing they could have done to recover from Eclipse taking over the market. Even then, IntelliJ was a relatively niche product (compared to the vast ubiquity of Eclipse), and JetBrains did diversify their product portfolio, both into more aspects of the development lifecycle and into more specialized IDE needs. Anyway, my point was that Eclipse was a game-changer that forced great change at Borland.
I regard Eclipse as a tool of the Devil. I'll be taking up Android again soon. Every time I try I find Eclipse giving me the goatse but now I'm going to use TextWrangler and Ant.
> I'll be taking up Android again soon. Every time I try I find Eclipse [...]
Is there a particular reason to choose Eclipse instead of Android Studio? Google is putting its weight behind the latter, with the Eclipse+ADT combo essentially on life support (and for good reason, since Android Studio provides a vastly better experience).
Moreover, the next version of Android Studio will include C/C++ support based on JetBrains' CLion that will cover development with the NDK.
I didn't know about Android Studio, I'll give it a try.
I actually like IDEs quite a lot; what I don't like are IDEs like NetBeans, Eclipse, Visual Studio, or Xcode.
I absolutely loved CodeWarrior to death, and I was heavily into THINK C; the first development tool I ever bought with my own money was Lightspeed C, which fit on a floppy.
Metrowerks CodeWarrior holds fond memories for me too (although I have never had a problem with Eclipse aside from bloated J2EE bundles like WSAD etc. It's easy to set it up to be quick and unobtrusive, I've found).
Well, you're lucky that Eclipse has essentially been deprecated as an Android IDE.
Also, Ant? Gradle and Maven are significantly more versatile and better supported for Android development (with Gradle now a first-class citizen), so I honestly advise you not to torture yourself with Ant.
A coworker introduced me to PyCharm after years of development in vim, and it has been an incredible productivity boost. It's painful to watch coworkers slowly fiddle around with text editors. I almost don't like telling people this because it's such a professional competitive advantage for me after switching. Even my coworkers using Sublime run into pain points that I don't (perhaps because they aren't installing all the plugins they should?)
* Its uncanny ability to find references and declarations in both Python and JavaScript, even when the references are dynamic. What is this .log() or .render() actually calling? Instantly I'm looking inside the relevant function, even if there are hundreds of functions with the same name. This works across imports and libraries. Doesn't always work, but even when it doesn't, it usually makes good guesses and lets me select from a list. If you're working on a large, unfamiliar codebase, this is a godsend. You will sometimes wonder why your coworkers are struggling to understand code references and then you realize they don't have this power-tool available to them.
* Local file history separate from git history, with fantastic navigation/diffing support. It's an undo/redo button on steroids that keeps months of history.
* When searching for something, you get a full, scrollable preview of the surrounding context of each matching result. Results are organized within the file hierarchy and are collapsible.
* Remote debugging. Drop an egg in your remote source and go.
* Annotated code, line by line, with the relevant commit and comment next to it.
* Auto-linting to catch common mistakes when coding. It'll immediately underline suspicious-looking code and tell you why. Lots of customizable style-linters to keep your code pristine looking.
I've been a WebStorm fanboy since I purchased a license during a sale two years ago. The biggest selling point of WebStorm for me is its smart searching. "Find In Files" doesn't just do text search; it's also smart enough to let you include or exclude matches in strings and comments. It also has a great parser that understands the structure of your code well enough to do legitimate "Find References" searches with really good results in spite of JavaScript's dynamic nature.
Obviously Python is a very dynamic language as well, so PyCharm has most of the same strong points. I'd been using the Community edition for a while, but finally picked up a Commercial license a couple weeks ago. In this case, the feature that finally convinced me to pay for it was no-kidding Remote Debugging. Drop a .egg next to your deployed app and stick an extra import and connection call into your code, and then PyCharm's debugger (which is another great feature in and of itself) just flat-out works over SSH.
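For anyone curious, the "extra import and connection call" looks roughly like this (a minimal sketch, assuming the pycharm-debug.egg that ships with PyCharm is on the remote app's sys.path and a "Python Remote Debug" run configuration is listening in the IDE; my-dev-box and port 21000 are placeholders):

    # Hedged sketch of the connection call PyCharm's remote debugger expects.
    # Assumes pycharm-debug.egg (which provides the pydevd module) sits next to
    # the deployed app and has been added to sys.path.
    import pydevd

    pydevd.settrace(
        'my-dev-box',         # machine where the IDE's debug server is listening
        port=21000,           # port from the remote-debug run configuration
        stdoutToServer=True,  # mirror the app's stdout back into the IDE console
        stderrToServer=True,  # mirror stderr as well
        suspend=False,        # don't pause here; stop only at real breakpoints
    )

Once that call connects, breakpoints set in the IDE fire in the remote process, and if the port is forwarded over SSH the whole thing just tunnels through that.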
Ultimately, both these tools have saved me enough time and effort to pay for themselves repeatedly.
While WebStorm and the other JetBrains IDEs are great, the RAD/WYSIWYG part is missing. We already had FrontPage/Dreamweaver for HTML 3-4. If JetBrains added an HTML5 WYSIWYG editor based on WebKit/Blink to their IDEs, it would be awesome.
I don't think so. They have an excellent IDE for editing HTML at the source level, and an HTML WYSIWYG editor wouldn't have developers as its main target; most developers would prefer editing the HTML directly (and would need to, unless they used a templating system based on the ideas of Enlive).
These days the target market for WYSIWYG HTML editors would probably be using an online CMS like WP - which actually has a WYSIWYG HTML editor. Failing that, Microsoft Word had nice HTML output last time I checked.
> HTML WYSIWYG wouldn't have a developer as a main target, most developers would prefer editing the html directly
WYSIWYG always had a split-screen view which allows one to work in design and code mode at the same time (and see the changes in the other mode). It's very efficient.
> Microsoft Word actually had nice HTML output
MS Word 2000-2013 web page export is based on the old FrontPage engine; it generates invalid HTML4 and converts vector graphics and WordArt to VML, the proprietary predecessor of SVG, which IE10+ doesn't even support anymore (except in legacy quirks mode).
> online CMS like WP
The contentEditable HTML4 API that CMSes use is sadly completely broken in every browser, each in a different way - the experience and the quality of the output code are far worse than FrontPage ever was - and Dreamweaver is better in every respect.
There is a reason why we have BBCode and Markdown: contentEditable isn't that great at the moment. And since several browser vendors have a competitive advantage with their web-based office services (Office 365 web Word, iCloud.com Pages, Google Docs), they will never fix the various contentEditable bugs in IE/Safari/Chrome.
The current year is 2015; MS Word has come a long way since 2001. In fact, they bragged about clean HTML output, suitable for inserting into your blog, in MS Word 2007. If you haven't worked with Word in almost 15 years, do yourself a favor and don't assume it's much the same anymore - most of the complaints I hear about it are far less of an issue in the newer versions.
Also, I have no idea where you got the idea that WYSIWYG always had a split-screen view, unless there is some product named WYSIWYG. For me it has always meant What You See Is What You Get, and the entire idea was that you don't have to mess with how it works under the hood. As implemented in e.g. Word (or OpenOffice), you don't even have the option of doing that. Are you arguing that Word is not WYSIWYG?
Finally, take a look at the output from the WP HTML editor. I have no idea why you bring up the contentEditable API, but either you are misinformed or WordPress uses something else, because the output is pretty clean and it can convert between that and free HTML entry.
You are wrong, please test it again. Word 2010 still uses the FrontPage engine to output not-so-good HTML4 code with VML graphics. By the way, Office applications haven't changed that much since Office 97 (besides the regular interface changes).
And just because some people have never seen a good HTML WYSIWYG editor doesn't mean it's a bad rapid development method, or that it should not be integrated into a modern IDE.
- contains VML code
- will not render correctly in any modern browser
- If you want to render it in IE11, you have to press F12 (Developer Tools) and switch the document mode from "Edge" to compatibility with version 7 (or even 5) of the IE engine; then it will be drawn "correctly".
To clarify, the generated code is invalid HTML 4.01 (20 errors according to the W3C HTML4 validator, https://validator.w3.org/check ), and the first few lines look like this:
Microsoft split FrontPage into two products, SharePoint Designer and Expression Web (the latter with a new web rendering engine), though both products are either a shadow of their former self (no WYSIWYG view anymore) or abandoned (a free download without support): https://en.wikipedia.org/wiki/Microsoft_Expression_Web
I don't think that would happen, given all the differences and messy implementations across browsers (even between the WebKit-based Safari and Chrome); a file watcher plus auto-reload is less of a headache.
WYSIWYG always had a split-screen view which allows one to work in design and code mode at the same time (and see the changes in the other mode).
It's very efficient and less of a headache. WebStorm could embed WebKit and sync the code view like Dreamweaver does. JetBrains' IDEs are Java-based, though, and integrating a C++-based WYSIWYG view might be more complicated, and all the Java-based browsers, like HotJava ( https://en.wikipedia.org/wiki/HotJava ), have been discontinued.
Not a PyCharm user, but I use IntelliJ IDEA, PhpStorm and WebStorm - they're all from the same family.
The most important things are:
1) Static error analysis. Saves a lot of time. Really, hundreds of hours.
2) Code completion (VS users call it "IntelliSense").
3) Refactoring tools. You can change the name of a method in one place and the IDE will update every usage of that method throughout the code. It's extremely important.
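To make that concrete, here's a minimal sketch of the same kind of rename done programmatically with the rope library (the refactoring engine several Python editors use; it is not what the JetBrains IDEs use internally). The throwaway project and the Box class are invented for illustration:

    # A rename refactoring, IDE-style: resolve the symbol, then rewrite the
    # definition and every usage, rather than doing a text search-and-replace.
    import pathlib
    import tempfile

    from rope.base.project import Project
    from rope.refactor.rename import Rename

    root = pathlib.Path(tempfile.mkdtemp())
    (root / "shapes.py").write_text(
        "class Box:\n"
        "    def length(self):\n"
        "        return 3\n"
        "\n"
        "box = Box()\n"
        "print(box.length() + box.length())\n"
    )

    project = Project(str(root))
    module = project.get_resource("shapes.py")
    offset = module.read().index("length")            # cursor on the definition
    changes = Rename(project, module, offset).get_changes("size")
    project.do(changes)                               # definition and both calls updated
    print(module.read())
    project.close()

The IDE does the equivalent behind a single keystroke, across every file in the project.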
I'll second all those points, especially the first one, but I'll add a word of caution. All of the points you mentioned require the IDE to parse all the code, not just the code you are writing at this moment. If it doesn't know the type of $foo, it can't tell what $foo->getModel() calls, and therefore it can't rename that getModel(), for example (the same failure mode is easy to show in Python; see the sketch after this list). This means you have to be careful in projects using older frameworks or CMSes. From recent experience, a great example of such an atrocious codebase is Magento:
- their method calls are excessively chained, which means a failure to analyse step 2 of the chain blocks analysis of all further chained calls
- their phpDoc either doesn't exist or is actively misleading at times
- they rely on magic methods all throughout the codebase without properly annotating them
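To illustrate the point outside PHP, here's a hypothetical Python sketch (ProductModel, registry, and get_model are invented names, not Magento code); the chain is only analysable, completable, and renameable once the tool knows what each step returns:

    # Why chained calls go dark without type information.
    class ProductModel:
        def load(self, product_id: int) -> "ProductModel":
            # pretend this fetches the product
            return self

        def get_price(self) -> float:
            return 9.99

    registry = {"catalog/product": ProductModel()}

    def get_model(name):
        # No return annotation and no docblock: static analysis can't tell what
        # comes back, so everything after the first call in the chain is opaque.
        return registry[name]

    def get_model_typed(name: str) -> ProductModel:
        # With the return type declared, .load() and .get_price() below resolve.
        return registry[name]

    print(get_model("catalog/product").load(42).get_price())        # opaque to the tool
    print(get_model_typed("catalog/product").load(42).get_price())  # fully resolved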
I mostly develop in Sublime/Atom + iTerm2, but after working in RubyMine for several months, it feels like I lost some sensation in a limb (and I only explored the bare essentials). Biggest wins for me: Cmd-clicking pretty much anything to jump to the file, even if it's a gem somewhere outside of my app. Code hinting. The way the Rails console behaves.
For me the features I appreciate the most are the debugger, framework support, and decent multi-language syntax support (for when there's js embedded in the html). All that and it runs a lot faster than PyDev.
I was so happy when I found out that you can select a string and then inject a new language right there, which gets you code completion for that language. I was even happier when it allowed me to edit that string in a separate window as if it existed in a separate file.
Ruby web dev here... I pretty much use liberal puts statements (actually the awesome_print gem). It's never really been a problem for me with Ruby. From 2001-2007 I was primarily a Java developer and the debugger was essential. But since switching to Ruby in 2007 I've just never felt the need. I think it's probably the cycle time... we were using BEA WebLogic as our Java server and cycle times weren't quick, so you didn't want to waste time and needed to be able to step through the code. With Ruby/Rails it's almost instant to see your logging, so I've never even looked into how to use a debugger in Ruby.
Another Ruby dev here. Back in my Java days I used the debuggers in JBuilder and Eclipse. I now use pry and much prefer it as an experience; I found having everything at your fingertips, rather than scattered around a GUI, far more efficient.
It's almost professional negligence. I had the same situation 15+ years ago when Eclipse was coming out and Java was on the rise. We really didn't know Java all that well, so Eclipse's auto-completion and doc popups were a godsend. The guy sitting there with Vi(m) was bogging down beyond belief. It got to the point where we had to sit him down and tell him he needed to use the right tool for the job or get lost.
It's amazing that in 2015 we have people that use Emacs, Vim, or other editors that don't have intelligent, realtime code analysis and we consider that "hardcore, programmer machismo".
I guess I can't tell if this post is a troll or not. :)
Someone not getting their job done is a problem. Someone using vim as their editor is not. I could understand giving that feedback ("try an IDE") but telling them their job is contingent on it is probably missing the point.
Most of the people I work with seem to use vim/emacs, and it doesn't appear to be for "machismo" reasons. It's likely just what they're used to.
I've been curious in the past, trying to find a reason to use an IDE. For me, the loose coupling of tools that you get when working with Vim in a console is priceless.
My trick (not really a trick) for efficiency is some simple key bindings to navigate between windows in Screen.
And if I really need power I'll run tabs inside of individual Vim instances within each Screen window.
I really think IDE dependence is one of the things that can prevent agility within a team. As soon as you introduce a product into your stack that a person's IDE doesn't have deep integration with, or doesn't play nicely with, all that productivity that person gained by learning that IDE is all of a sudden gone.
I was that guy & never switched to an IDE because I was really good at tearing through text with the vim model. IntelliJ's vim plugin (IdeaVim) is top-notch. Just having auto-completion, find definition, find file, and rename is worth the price of admission.
>Someone not getting their job done is a problem. Someone using vim as their editor is not.
The best programmers I work with use emacs or vim and are good enough to keep up even though most others use IntelliJ IDEs, but it is a handicap; they are just good enough to overcome it. I have no doubt they would be even better in a good IDE.
>> Do you really think the best programmers spend enough time refactoring for the use of an IDE to help?
> Is refactoring a bad thing now?
It isn't; he just meant that the best programmers refactor less (since they are the best, they presumably get things right, or predict the future correctly, more often than the others).
I work on a large C++ codebase in emacs. I find emacs keybindings and extensibility (via elisp) far superior to any IDE - and I have tried Eclipse, CLion (IDEA for C++), Code::Blocks and NetBeans.
For C/C++ specifically, the state of code analysis in emacs has greatly improved due to clang. For example, I get full auto-complete (including for variables declared 'auto') in emacs with irony-mode [0] for code completion, flycheck-irony [1] for on-the-fly syntax checking, and I get jump to definition/find references/etc via clang-ctags [2]. I assume vim has similar extensions.
CLion is only a few months old and can be considered a 1.0 release; it's already gaining more and more capability with each minor update.
I use PhpStorm, CLion and PyCharm, and I'm happy to pay for them; the editor itself is very high quality, and having common tooling across all three languages I use is great. If only they did a "golang" IDE.
You can use the Go plugin for almost any IDE based on the IntelliJ Platform, that includes CLion and Android Studio. And the best of all, it's free. Give it a try by installing it: https://github.com/go-lang-plugin-org/go-lang-idea-plugin
> It's amazing that in 2015 we have people that use Emacs, Vim, or other editors that don't have intelligent, realtime code analysis and we consider that "hardcore, programmer machismo".
Oh, that's amusing, and rather misinformed. Emacs in particular, when programming Common Lisp or Clojure, has a really impressive environment that Java IDEs can only aspire to (in part, that's an unfair comparison, because CL and Clojure are well suited to interactive programming and Java isn't).
I would be more careful with strong words like "professional negligence" or "hardcore, programmer machismo".
In 2015, do you think most people are using emacs or vim to work with Java or with a Lisp? I think this point stands for the vast, vast majority of people using emacs and vim.
> It's amazing that in 2015 we have people that use Emacs, Vim, or other editors that don't have intelligent, realtime code analysis and we consider that "hardcore, programmer machismo".
I feel the same.
Having been brought up on all the Borland products, Smalltalk VisualWorks and Oberon, I really cannot grasp why they keep themselves in a UNIX V7 world instead of a Xerox PARC one.
And I did use Emacs for several years, while deeply missing the Borland tooling, as that was the best thing one could look for on UNIX back then (Vim did not exist, just vi).
However in 2015, there are so many nice IDEs also available for UNIX...
"It's amazing that in 2015 we have people that use Emacs, Vim, or other editors that don't have intelligent, realtime code analysis and we consider that "hardcore, programmer machismo"."
What is amazing is that in 2015 there are people out there who believe Vim or Emacs have no intelligent, realtime analysis.
In fact, in 2015 Emacs and Vim have the best realtime, intelligent analysis out there for those who know how to use them and are trained on them.
You can use elisp inside Emacs and automate EVERYTHING, running circles around anything commercial.
I have a company making software, and I know: with the proper training, those tools are incredible.
In fact, in 2015 Emacs and Vim have the best realtime, intelligent analysis out there for those who know how to use them and are trained on them.
I develop Cursive, an IDE based on IntelliJ for Clojure code. Its completion and navigation are categorically better than Emacs's, even for a language that supposedly has one of the best levels of support under Emacs.
To say that Emacs is automatically better than everything else is pure Stockholm syndrome.
I have nothing against Cursive, and in fact I have been eyeing it from afar. But this kind of comment is just needlessly dismissive, with little in the way of substance as far as I can tell. Would you like to expand on which features of Cursive are "categorically better" than Emacs'?
I'm not saying there aren't any, and certainly I've used better refactoring tools than https://github.com/clojure-emacs/clj-refactor.el in other languages. But what on earth is wrong with the navigation? The main advantage I've seen from a colleague is the automatic highlighting of usages, but I'm pretty sure I can set that up in Emacs too.
I have nothing against Emacs either, and I didn't say that there was anything wrong with its navigation. What I said was the assertion that Emacs and Vim have the best realtime analysis is just wrong - most Emacs modes have none at all.
If you're interested in the implementation of this sort of functionality in Emacs, Steve Yegge's article on implementing JS2 mode is really interesting: http://steve-yegge.blogspot.co.nz/2008/03/js2-mode-new-javas.... He says at the end that IntelliJ's JavaScript support is just better, and it's very clear from the article how difficult it is to retrofit this sort of functionality into something that isn't built from the start to support it.
But you're right that the tone of my comment wasn't great, and in general getting involved in these discussions is just a bad idea. Every so often I see a comment that makes it difficult to resist, but I always regret it.
To answer this specific question - Cursive's Java interop support is much better than anything else I'm aware of. It implements Clojure's type inference in the editor, so method calls are accurately resolved to the right method based on the number and types of the parameters, and completion takes this into account so that you can explore Java APIs almost as well as with Java in IntelliJ. Cross-language navigation and Find Usages works - in Cursive you can navigate from RT.var() calls to the Clojure code, and Find Usages from Clojure will find the RT.var() calls. This also works for other JVM languages - there are quite a few Cursive users with mixed Clojure/Scala codebases, and this all works there.
You can search for all usages of a particular keyword, and it will also find all local bindings destructured from it using :keys. Namespaces will be auto-required during completion based on examples elsewhere in your project, not hard-coded config. This works in the REPL too - you can type str/tr and when it's completed into str/trim Cursive will automatically (require '[clojure.string :as str]) in your REPL.
One other nice thing is that since Cursive works from source, pretty much everything that works for Clojure works for CLJS too.
There's lots more along the same lines, hopefully this gives you an idea. Again, there's nothing wrong per se with Emacs' navigation, but IntelliJ just provides a much more sophisticated infrastructure for it. Obviously elisp is Turing-complete so all this could in theory be implemented but it's much harder, as Yegge describes in the article I linked in my other comment. The clj-refactor guys are doing a great job but the lack of a good indexing infrastructure is going to mean that a lot of this functionality is hard or impossible to implement.
Emacs is a great choice for a lot of people and if you're willing to invest the time to trick it out and maintain it, especially for Clojure you can get a really nice environment. But probably the feature that most people like about Cursive is that it just works out of the box, and stays working with no messing around and no development of your editor required. And WRT my original comment - to say that Emacs has the most sophisticated runtime analysis is just wrong.
Do you realise that emacs supports 'intelligent, realtime code analysis'? I program in Go, and I have auto-completion and documentation.
Emacs is more extensible, and more easily extensible, than Eclipse.
Yes, an IDE is pretty nice for a particular kind of horrible enterprise coding: the sort where no-one understands the system, where everything's horribly over-architected and where, yes, ultra-fast auto-completion is necessary to get anything done. But…why live like that? There're interesting problems and fun environments where one doesn't need to live in that kind of hell.
emacs can handle that, but you have to start asking yourself if it even makes sense to do. A lot of times, those IDEs are just managing spurious complexity, not actually helping build something great.
> where we had to sit him down and tell him he needs to use the right tool for the job or get lost.
One can look at it the other way. If your language needs a million lines of IDE code to function and for programmers to be productive with it, maybe it is time to sit down and tell that language to "get lost".
If you are more productive with a million lines of IDE code backing you up + that language than you would be in Emacs with C++, why would you tell that language to get lost?
That's pure nonsense. There is nothing that prevents languages like Lisp or Smalltalk from being used with a console editor and the compile-pray-debug cycle, but that's no reason to shoot yourself in the foot. This is true of any language, but for some reason there's a large mass of developers of those other languages that see shitty tools and incomplete programs (remember kids, high LOC count means bloat!) as some kind of virtue.
Emacs + SLIME for Common Lisp and Tuareg mode for OCaml can do everything that my IntelliJ can do in Java. Then I can extend Emacs to cast a spell, hex Vim users and spam HN every time I fix a bug -- even with its plugin system, I suspect IntelliJ isn't that extensible.
> It's amazing that in 2015 we have people that use Emacs, Vim, or other editors that don't have intelligent, realtime code analysis and we consider that "hardcore, programmer machismo".
I think it's mostly that the current state of "intelligent, realtime code analysis" is so terrible that it can be replaced with "rename" and "find references of". I.e. ctags.
Then you have no idea about the current state of intelligent, realtime code analysis. Any old Java IDE should be able to rename length to size on an interface and have all the classes that implement that interface update the name, then have all the places that call it update to use the new name.
Try that with rename and you will quickly run into trouble, unless you have only one class that has a method called length.
This is still a rename operation. I have difficulty understanding how this would require an IDE for anything but the more complex codebases, and even then, I would much prefer a discrete code server.
Refactoring is still in its infancy. A) It's not entirely possible in C/C++ because of macros. B) Refactorings serialize extremely poorly to diffs, so any boon from refactoring is sidestepped by SCM. C) Most of the refactoring done by programmers cannot be done automagically by the IDE (e.g. type signature changes are painful as hell). D) 95% of changes are supported by ctags-like functionality, i.e. a basic identifier index, plus re-compiling to fix the bugs.
Given this, I'm pretty sure IDEs have made a lot of progress towards renaming and looking up references really well, but not a lot of progress towards actually being useful.
That said, I do find that when the language supports it, a precise tool to find usages and/or rename is worth its weight in gold. But this is hardly anything near "intelligent, realtime code analysis". Pretty sure that's referring to typeahead, which again is a step removed from a source code index and can be faked by using a regular expression to extract identifiers to suggest.
Using a tool to refactor is better for source control. The diff sucks, but it sucks because the tool actually found every instance and refactored properly, meaning going through to review it is simpler as long as you trust the tool. We should be using tools for this type of thing, for exactly the same reasons that automated formatting should be the default response to any debate on formatting issues.
It would seem that you've not used Resharper. Best refactoring tool I've ever seen. Actually seems like they are forcing the Visual Studio team to pick up their game.
> (e.g. type signature changes are painful as hell)
Resharper handles this very well.
I think it's that if you use Vim and Emacs properly with the right extensions, then they can be pretty amazing, but if you're just cargo culting and use Vim just to say you use Vim, then it won't work.
I use Emacs to write everything. So, in the last 12 months at least C, C++, D, Haskell, Ruby, and Python.
For Python I use rope and jedi. Auto-complete, goto definition and refactoring.
For C/C++ I use my own package cmake-ide. As the name implies, it only works for cmake builds. But it does work. Similarly to Python, autocomplete, on-the-fly syntax highlighting, goto-definition, ...
For D I use another one of my packages, flycheck-dmd-dub.
For Haskell I can't remember now, but it's all very good.
For Ruby I use flycheck, rubocop and rsense.
Not OP, but I use emacs + jedi[1] for Python development.
In addition to code completion, having documentation available to you as you type, and easily navigating code (e.g. jump to definition), you also have all the usual emacs goodies (unlimited undo/redo, easily defining keyboard macros, writing functions for emacs and binding them to keys).
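For reference, the heavy lifting in that setup is done by the jedi library itself; here's a minimal sketch of calling it directly (this uses the post-0.16 jedi API, which has shifted over the years, and the snippet being analysed is made up):

    # Roughly what emacs-jedi and similar editor plugins ask jedi to do.
    import jedi

    code = "import json\njson.du"
    script = jedi.Script(code, path="example.py")

    # Completion: what can follow "json.du" (line 2, column 7)?
    for completion in script.complete(line=2, column=7):
        doc = completion.docstring()
        print(completion.name, "-", doc.splitlines()[0] if doc else "")

    # Go to definition: where does the "json" on line 2 come from?
    for name in script.goto(line=2, column=1):
        print(name.module_path, name.line)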
I've had reasonable success with company mode and rtags for C/C++. Fairly straightforward to set up and reasonably accurate. I'd hesitate to call it streets ahead of even Xcode, let alone Visual Studio, but it does roughly what you'd hope for, without too much hassle. Definitely worth turning off company mode's complete-as-you-type functionality, though...
After 9 years of using emacs, I still don't find it unambiguously better than Visual Studio for working with C/C++. The text editing is a lot better in emacs, but the code browsing/completion is not - and for large, multi-person projects I've always found the code browsing/completion functionality more important. You can always load files into emacs for one-off operations if you need to do something in particular.
From what I've seen, I'll disagree. The people that use vim efficiently [emacs users, just substitute emacs] don't just use vim. They are also very efficient with the surrounding (Unixoid) environment, specifically file searching... I'm not sure IDEs have an edge there.
These days there are also things like eclim. I don't know; it seems like vim-style editing is still the best way to get around the codebase and actually work with the text, but IDEs have great integration for working with the code. So I think the optimal solution is a "best of both worlds" approach, which means a vim plugin for the IDE or an IDE-features plugin for vim. I see the former setup a lot.
---
And my wild guess is that emacs actually has great code integration for Lisps. I don't know what the state-of-the-art Lisp IDE is, but I'd be shocked if it were better than emacs.
It's more than a little ironic that you claim Emacs/vim users use their tool out of "hardcore, programmer machismo", and you're doing basically the same thing in the opposite direction by accusing them of being unprofessional.
For me, I've used both IDEs and plain ordinary text editors for close to 30 years (the first IDE being QuickBasic 4), and tend to find that IDEs get in the way more often than not. My personal software designs tend to use single components for single functions, so the additional complexities of an IDE's second compiler, second build tool, and second VCS front end are less than appealing to me. That said, I've gotten a lot of use in the last 24 hours out of code refactoring and the IntelliJ debugger, so I think the honest answer is to use both approaches if you like.
>It's amazing that in 2015 we have people that use Emacs, Vim, or other editors that don't have intelligent, realtime code analysis and we consider that "hardcore, programmer machismo".
You found me out! My preferred environment setup is nothing more than machismo directed at you, a complete stranger. If only I'd had the humility to try an IDE, I could have been enlightened.
My eyes hurt from rolling them so hard at your comment.
Emacs, SLIME and Common Lisp is something I would give an exception to. The major productivity boost I credit to that combo is the ability to change the code on the fly while debugging, especially Common Lisp's continue-from-exception mechanism.
Smalltalk environments have had this ability forever. I just imagine how much more productive I could be if in Visual Studio I could edit-n-continue on everything...even if my code wasn't in a successful compilation state.
Java has this, and JRebel has it on steroids, but I want to be able to edit-and-continue even if my compilation state is broken. Just throw an exception when it fails because of bad state.
Common Lisp's exception mechanism is indeed out of this world, but it isn't something that has been implemented in any other language that I know of (not even Clojure), and you have to pay a fortune to get a CL system that works cross-platform with things like threading or an FFI that allows Lisp callbacks from C code. Java has supported both since forever (and its java.util.concurrent package means any old Java programmer has access to all sorts of really advanced non-blocking data structures, including a ConcurrentSkipListMap and an AtomicInteger), and for most people, for almost all tasks, that makes Java better. And for Java you need an IDE.
Borland was also many times JetBrains' headcount at its peak, and even today is significantly larger. It doesn't follow that because a market supports a company of one size, it must also support a company of a much larger size.
Exactly. I also wonder whether it would even have been possible to start JetBrains in the US. Being based in the Czech Republic gave them a much lower cost structure, and in the beginning, competing with Eclipse's free price tag meant no one could ever have supported a sizable company in the US.
The Czech Republic is just the place of registration. JetBrains is based in St Petersburg, Russia. Until recently all the developers were in St Petersburg; then some people moved to Germany. JetBrains started as a small startup and then absorbed people from the St Petersburg office of Borland.
JetBrains sells $99 IDEs like in the Turbo Pascal days. The current producer of Borland's Delphi, Embarcadero, sells the product for a minimum of $1000, plus a subscription is now required to get ANY bug fixes at all! Accessing a client/server database is a $500 add-on (or an upgrade to the $2000 package), etc. Delphi Professional + the c/s add-on plus a subscription runs about $2147 the last time I checked!
Who in 2015 pays $2,147 to be able to develop software? Note that you'll need to spend even more to target mobile and there's no Linux solution yet either.
Delphi users refuse to accept it (or that mobile and web are here to stay, or that there are better VCS systems than Subversion, which is the only VCS that Delphi's IDE fully supports), but the days of the expensive, proprietary IDE are indeed over. MS releasing a free VS Community Edition was just the final nail in the coffin.
If professional software developers don't get to have professional IDEs for their languages, then software quality will continue to stay at its present level.
The "good enough, and not even that" mentality is pervasive. When the IDE market is dead, it means the professional pride of the industry as a whole is nonexistent.
(Eclipse does not even come close to amateur level, let alone professional. Vim and emacs, which you have to configure yourself to get close to a decent level of proficiency, don't count either - I'm using all three.)
Slightly off topic, but I think they would be a great acquisition target for Google. Acquire, open source, push hard. I think there's tremendous strategic advantage in providing the go-to tools for programmers. Could be well worth giving up a nice revenue stream, too.
[it's similar to providing scholarships and other "early access" recruiting measures]
The only way I can see this kind of acquisition making sense would be in order to push Kotlin as a Java replacement on Android.
It looks like Google intends to go the Java 8 way though, so that's not terribly likely.
It pretty much was the fast release track that Delphi had, and the constant feeling of paying full price for each release. It got stupidly expensive for those paying their own way while all these other options appeared at much better pricing.
The packages were structured so that some really nice features were bundled with stuff many would likely never use, and the upcharge to each tier wasn't small.
A license for IntelliJ for a company is currently 499 euros. Admittedly it still gets you much more than JBuilder would have - including support for 7 or so additional languages - but it is not even close to cheap.
If you are buying for yourself, then it is 199 euro which is less than half the price (although that doesn't include VAT).
That's only the renewal price for a personal license. Companies are not allowed to reimburse or pay for a personal license. I can testify that the initial 1-year license for IntelliJ is $499.
Hopefully you pay far less than that now: their prices have come down. But yes, it used to be a $600 product that was totally worth it to many developers because of how much better it was than Eclipse. In a world full of boilerplate like Java, IntelliJ's intentions just saved so much time. I remember asking companies that were using Eclipse to please let me use my own license of IntelliJ, because just what alt + tab and alt + insert brought to the table saved more than the price of admission in any given month. I made a similar argument for a second and even a third monitor, given the resolutions we had in those days.
What? At every enterprise shop I worked at, devs could pick Eclipse or IDEA. I am pretty sure I would be a day-one walkout if I were told it was an Eclipse-only shop.
Want to buy Delphi today from EMBT? At minimum you'll need Delphi Professional at over $1000. Want to access something other than SQLite via their database system? Well, to avoid upgrading to the $2000 price point you'll need the $500 Client/Server add-on that also brings things like (awful) JSON support. Note you're out of luck if you want things like HTML parsing, which still isn't in the standard library. You have to get a subscription now to get ANY bug fix updates (!!!), so that's several hundred more dollars. Now you're looking at an entry fee of about $2,147 for one (niche) language which hasn't seen a commercially published book since 2005 (seriously). And it doesn't end there. There's only a crippleware bundle for profiling (no 64bit support), so if you want a top, 64bit profiler that's another pricey $600 or $800 from a third party. There's no documentation generation support, so you're going to need to spend $200-$300 for that from another third party (I've seen $300 options that can't output PDF). HTML parsing? $60. There's an open source mega-framework that has support for ORM and web, but if you don't use that you're again looking at three-digit costs for each. Want matrices and math, statistics or data mining? That's about another $500 apiece with source code. There's no official testing framework and the one major open source one died so now there are many somewhat-compatible options you're going to have to hunt through on SourceForge. Oh, version control? Only full support for Subversion. There's finally some GIT support, but only local - you can't push to github or another central server with it. The IDE is buggy and you can't even redefine all of the keys (!!!!!!!!!! - yes the only IDE on earth that won't let you remap keystrokes) so you need more third party plugins that offer UI fixes, unofficial bug fixes and some remapping support.
It goes on and on. When I considered it for a project the cost came out to almost $5000! Instead we went full open source (including JetBrains' open source version of PyCharm) and got lots of functionality one couldn't find at any price on the Delphi platform - which incidentally is, just now in 2015, beginning to set up package management. Unfortunately it will be tightly controlled by Embarcadero and they've issued all sorts of warnings about not approving code that replaces functionality in Delphi (in short, they're afraid of competition). The community has no control over the language or the product and the diehards that are left treat it like a religion. They have something called "MVPs" who actually sign a contract to never disparage the language or Embarcadero! In exchange for selling their integrity they get free copies of the product. Completely coincidentally, they'll tell you that the product isn't overpriced. They'll also tell you it's used all over but no one talks about it because "it's their secret weapon". Delphi's product manager told me that he fully believes that "Delphi has had a greater impact on the business world than Python ever has". You not only have to pay all that money, you have to deal with the Scientology of programming languages. :-(
So no, you pay far more than for IntelliJ, you have no control over the product, you have no working roadmap, you don't even have RELEASE DATES for new versions. It's a whole other world over there than what the rest of us are used to.
These days torrenting is extremely effective thanks to computers like the Raspberry Pi, and whenever a useful application appears that must be paid for, a reverse-engineered and cracked version of it soon joins the torrent network.
I think the business of software as a downloadable application is dead. See what Microsoft is doing.
You talk as if this is a recent thing that the Raspberry Pi has enabled. Yet people were pirating software long before BitTorrent was around, and people have been using torrents, specifically, since long before the Raspberry Pi was a thing. I very much doubt the Raspberry Pi has made any significant impact on the number of people who torrent - let alone pirate in the wider scope of distribution mechanisms.
I used Delphi for more than 10 years, so this is what I believe went wrong:
* They lost Anders Hejlsberg.
* Free compilers got acceptable.
* Lack of backward compatibility: new VCL components from one version weren't compatible with older ones, so every new version required buying new components.
* Too expensive.
* No new books to learn Delphi from; the existing ones are too old.
For me (Delphi programmer until around 2001), there were several more important factors in the eventual failure of Delphi.
First, around that time Borland (or, briefly, Inprise) tried to move into the "enterprise" market, with expensive, nebulously defined "middleware" products like Midas. This is why they aggressively pursued failed tech like CORBA. It was a disaster, and had nothing to offer those who wanted to get stuff done. They also clung to the BDE despite the fact that ODBC was much more mature and faster. Serious users used some ODBC components that someone built.
They forgot that Delphi was a tool to build great apps in, and didn't figure out that Delphi was great on the back end. It was all about GUI apps, but it turns out that it was incredibly productive for non-visual apps, too. With a better strategy, Delphi might have competed with Java, but they missed a lot of what made Java great in the beginning. Support for web development, even sockets, was near-non-existent.
One problem with backend apps in Delphi was that it was painful to work with C libraries. One huge thing they could have done was to make it trivial to generate C bindings, but the best they had was a rather terrible tool to convert header files to Pascal files. (I wrote a much better tool that understood macros and could even translate complex C++ headers like Microsoft's MAPI, but I'm sure Borland could have done even better.)
Delphi also tried to pursue COM (in addition to CORBA) as a distributed programming glue. Delphi 4 even had a typelib editor (which was buggy and horrible), which you needed to use to get any performance at all; the automagic "OLE Automation" support built into the language was awesome but really slow. It turns out that distributed programming with COM was not very mature, and trying to use it with Delphi was painful (though possible).
Borland also got distracted by Linux (not a good fit for a closed-source company) and by C++ (C++Builder could never be as successful as Delphi since they didn't control the language and had to extend the C++ compiler with custom directives to make it work the way Delphi did).
In the end, it's best summarized as: Lost focus, didn't realize what they had, pursued the wrong markets. Being stuck with a proprietary language didn't help, of course.
I agree with everything you're saying here except for one thing: Delphi was amazing for COM programming. In fact, it took me years of reading complaints about how complicated COM programming was (and thinking everybody is nuts) before I first saw an example of COM done with C++ and realized what people meant. We were doing a ton of COM / COM+ programming in Delphi 5 and we've never had any problems.
Having done a ton of COM programming in Delphi, the problem wasn't that it didn't work that well — the language integration was fantastic, and things like QueryInterface and AddRef/Release were done for you — but the support wasn't quite good enough.
The typelib editor, for example, was atrocious in Delphi 4 (maybe it got better in later versions), and if you wanted to use a COM-based library (MAPI, TAPI, OLE DB, etc.) you couldn't do it without the headers. Since so much of COM came from Microsoft, it felt like fighting a losing battle against the official way of doing COM programming (Visual C++, at the time). Ironically COM programming seemed much worse in VC++, given the lack of language integration.
The funny thing is that now, around 20 years later, we have .NET going fully AOT with .NET Native (kind of the Delphi experience) and C++/CX with XAML (kind of the C++Builder experience).
Delphi was so powerful when Win32 rich clients were king. Pre 1998 the web was not a serious development target for applications for most people. Mac OS was on a pretty large downward spiral. Linux was still under the radar (and changing too rapidly for serious closed source development).
But the web started developing, OS X started to win back the crowd and Linux showed potential (I stopped doing Delphi development when I moved to a linux box).
Since then they have added a lot of the functionality to do cross-platform development; however, a lot of it was too late, and worse, a lot of it is ugly. I have never found a rich client building experience as good as Delphi's. I still think it is one of the best IDE / GUI builder packages ever assembled. However, when I went back for a look a few years ago, the price was eye-watering, and unfortunately it seems that if I want that style of development my best bet now is C#.
A pet theory for why OS X and the web seem so friendly with each other is that the early web was considered an extension of print (and later video/audio) media.
And as best I can tell, media production has long been a bastion of Apple.
It likely didn't hurt that OS X is deep down a BSD, so you could switch out the L in LAMP and be on your way.
Certainly media companies have always worked on Macs, but the platform was actually quite bad for web development in the early days. The colors were wrong, the fonts were wrong, the entire scaling was wrong. Mac users had a defective version of Netscape and a uniquely defective version of IE.
It wasn't until Mac browsers adopted "standards" which matched the Windows way of doing things (96dpi, sRGB, MS fonts) that the Mac was really a viable web development platform.
Still using D7 nearly every day, alongside XE7 at work. The impressive, yet sad, thing is that 2002's Delphi 7 still runs circles around XE7. Fast, robust, sound documentation. Even Visual Studio and IntelliJ feel weird compared to the simplicity of that old IDE. Sure, lots of parts are missing: no refactoring, no reformatting, whatever. For me it's more fun to use D7 than anything else at the moment.
That's not really about the IDE side of Delphi, but the compiler and runtime side. It was an absolutely awesome product, targeting the individual developer or small team writing a custom application. Nothing I've worked with since compares for that. But its limitations are the same as Ruby on Rails' limitations: wonderful at one thing, only a headache anywhere else, and as we ask developers to do other things, the tools just stop being quite so good.
Looking at what we do on websites today, and how much it costs to build it, it sure feels like we've been taking steps backwards for quite a while.
Pretty much my view too. Anyone who knew Delphi immediately went "OMG, that's Delphi/VCL with C-like syntax!" when they saw C#/.NET circa 2000. And once Microsoft started giving away VS Express for free in 2005, Borland's original market - IDEs and compilers for the masses - was well and truly dead.
It's easy to blame management for shifting focus to enterprise software, but they had to try doing something new.
1997: "Microsoft also offered Mr. Hejlsberg a $1.5 million signing bonus, a base salary of $150,000 to $200,000 and options for 75,000 shares of Microsoft stock. After Borland's counteroffer last October, Microsoft offered another $1.5 million bonus, the complaint says."
Your description of C#/.NET sounds like it might get me to take another look at C# in the future.
I learned Windows programming on-the-job using Delphi and then thought I should investigate a "more common" C++ environment (I already knew console-based C++). Learning that _Visual_ C++ had no similar RAD environment was really shocking. My resource editor isn't attached to an automatic code generator? Say what?
After that experience with Visual C++, I'd assumed Microsoft did roughly the same thing with "Visual C#" and never gave it more than a cursory look. I also had the fear that this new .NET thing would be yet another technology they hype and then deprecate. What's the official Windows GUI framework this month? Is it Win32? Or maybe MFC? Or was it WPF or WinForms or some new XAML thing? (Sorry, went off on a tangent there...)
Yes, until C#/.NET came out, Delphi was much nicer to work with than either VB or Java. Delphi was originally conceived as a 'VB killer', and that's exactly what it was.
Not sure about the exact timeline, but I would also say that they didn't even try to stay in the "for the masses" market. Delphi 8 and later IMHO didn't offer anything compelling, and then there was this confusing new thing called Delphi.NET, which was Delphi but different... So this space stayed on Delphi 7.
Even after not using it for 10 years I miss Delphi a lot. You could click a UI together in a very short time, add some code, and you had a running standalone .exe without the need for any runtime DLLs. Yes, Delphi was limited in some ways and the language was not the best, but it was (and in my opinion still is) the best way to easily create a "simple" standalone GUI application.
I remember Delphi and its Microsoft counterpart Visual Basic 1-6. They were the best RAD tools to develop GUI applications. Frontpage/Dreamweaver/GoLive were RAD tools for HTML3-4 and very good too.
I currently miss the rapid development tools we had 15 years ago. There is definitely a need for an HTML5 RAD tool - ideally with the combined functionality of JetBrains' WebStorm and Adobe's Dreamweaver.
I'd be interested in how that'd be better, today, than C#/WinForms (WPF is nice, but not for quick things). I only used Delphi a little, but C# and WinForms are super nice for clicky things, and I don't think you'll find a machine in any sort of common use without at least .NET 2.0 on it, making it at least effectively standalone.
I think the biggest problem was that between Delphi 7 and Delphi 2010 there wasn't a version worth paying for, even if you were an enterprise shop at the core of their Delphi target market. All they did was bundle components which you could have bought yourself, and tack new features on the side, while letting the core platform go basically unmaintained. After Embarcadero took over, Delphi releases actually improved the core again.
I don't buy the argument that the IDE market was dead. If properly managed to cater to enterprise developers' needs, Delphi could still have been big. A lot of commercial software got built with Delphi not because it was affordable, but because it was better than the alternatives.
I have fond memories of Delphi 3 during my high school years.
I did a marketing internship at Borland France around 2004, and Delphi was still the cash cow, but a great deal of effort was put toward "enterprise" software like StarTeam, Together, etc. that came from acquired companies, IIRC.
It's easy to judge now, but back then the shift toward web apps was happening slowly, and C#/.NET was not especially a success...
Sadly, I always wanted to get into Delphi, but it being unaffordable (even as a student - whereas JetBrains and MS give me their IDEs for free...), as well as the lack of books or online documentation, is a factor. I tried using Free Pascal with Lazarus, but Lazarus/FP still lacks proper documentation as well, sadly.
That's strange; I remember buying an Italian dev magazine back at the end of the '90s that included a full copy of Delphi (OK, the 16-bit version) with application redistribution rights.
I think I probably paid the equivalent of 10 euros for it...
One thing that I liked a lot about Delphi (compared to VB, for example) was that there were plenty of free VCL components (sometimes with source, too) that could be downloaded and used in your own projects.
I was also introduced to Delphi by a computer magazine in Denmark; they even used it as a theme in the following issues, where you could follow articles on how to code in Delphi.
Same here - in Poland. We had Delphi on school computers, but obviously I couldn't "take it home". So it was really great (and surprising, too) to find it in a magazine.
I remember a few books coming out lately; Coding in Delphi by Nick Hodges comes to mind if you want a look at the latest of what Delphi has to offer.
Compilers used to be something you paid money for. It took a while, but open source and I would say gcc in particular killed that.
It's funny how gcc was around for a long time, but it was only in the 2000s that cash cow compilers started dying. I think that coincides a bit with Linux and OS X becoming popular for developers. For example, it wasn't until 2005 that Microsoft started providing a VS express SKU.
You see Linux start to kill old school commercial Unix (like Sun) around the same time. Probably the same trend.
Compilers used to be something that nobody could afford to work on for free, mostly because you had to start from scratch and move forward. I don't know if gcc started as an improvement on pcc or not, but in its early years it was both functional and atrocious. Comparing its generated code to the Green Hills C compiler's, you just shook your head and wondered why anyone would ever advocate this stuff.
But the really magical thing about open source is that it never dies. And there is always someone willing to look at the code and fix a bug, or add a feature. And if you had a new architecture and no budget you could not afford the NRE charge of a big compiler company to build a code generator for you. And so it got incrementally better. Bit by bit. And the better it got, the more useful it was, and the more useful it was the more people used it, and then at some point it crossed the point where the economics of using a free compiler and dedicating some staff to fixing the problems you had made more sense than buying a compiler and waiting for the compiler company to fix bugs.
It really is a fascinating thing to consider and I expect that someone could write a very entertaining book about it at some point.
> It really is a fascinating thing to consider and I expect that someone could write a very entertaining book about it at some point.
Someone already did write that book, and that someone is RMS! If you haven't read it already, I highly recommend "Free Software, Free Society". And in the spirit of things, it is of course available freely: https://www.gnu.org/philosophy/fsfs/rms-essays.pdf
(Though I do have a hardcopy which I'd never part with.)
It's funny, just yesterday I was in a thrift store and found a brand new still sealed in shrinkwrap copy of Visual C++ 2.0 (with its totally awesome C++ logo made of 3D plus signs: http://www.amazon.com/Microsoft-Visual-C-2-0/dp/B0016LE9FO) for $2.99! I remember when I was a teenager looking at copies of Visual C++, QuickBasic 4.5, Turbo Pascal, etc. on the shelf of CompUSA and wishing I could afford any of them so I could go beyond messing around with QBasic. Kids today have it so much better with access to great free development tools, and they don't even need to crawl text files on local BBS' to figure out how to use them anymore. :)
I remember in the 90s advocating for things like gcc and the absolute scorn I would receive. "A free compiler!!! What kind of crap must that be??". You paid for compilers, you paid for your version control, you paid for your bug tracker, and that was that.
Interestingly, at that company (a defense contractor) it was the government more than anything that changed that attitude. There were a lot of projects initiated by the DoD designed to test whether Linux and other open software were a good choice. Attitudes slowly came around.
And it (paid=good) is not an entirely unfounded position. There was some really bad open-source software, and Visual Studio is still on top by some measures (quality of the debugger). But the amount of pain the VS compiler's standards noncompliance brought was just frustrating. And if you want really fast code on x86, it still makes sense to buy the Intel compilers (C++ and Fortran).
It's important to note that Visual Studio got a lot of love from MS above and beyond what its revenues would support. Likewise, Intel gave gcc a lot of love because they needed to get software tuned for their longer pipelines.
If the VS debugger is king of the hill, then it must be really grim out there. At least half the time, I'm using printf debugging because actually running in the debugger brings my entire machine (16 GB RAM, quad-core intel) to a standstill.
I laughed at "...building the software analogs of sewer systems, utility poles, or synthetic hairballs for ceramic cats."
It derives from a Steven Wright Joke:
"All of the people in my building are insane. The guy above me designs synthetic hairballs for ceramic cats. The lady across the hall tried to rob a department store... with a pricing gun... She said, "Give me all of the money in the vault, or I'm marking down everything in the store..."
They started out making Turbo Pascal a great product that anyone could afford, but it was so much cheaper than the "professional tools" everyone else was selling that businesses wouldn't take them seriously; there had to be a catch to the cheap price. So eventually Borland decided to at least triple the price starting with one of its versions, but it didn't work: they didn't gain ground against Microsoft, and they lost the hobbyist developer.
Turbo Pascal was the first software I remember buying in a retail box. Before then it was all copied and school-given software, but something in 15 year old me really felt it was worth buying this tool.
It was. That purchase spawned a career out of my evening hobby. Then, a few years later, when I was mostly using C or VB, and only firing up Pascal for fun now and then, Delphi came out. I was super excited. Until I realized I could barely use it, and none of my local book stores had anything on it that was helpful to read.
By the time the internet came along, and I got into full time development.. .net was out, with "academic" pricing for VS 2003. That purchase brought me real jobs, at real companies.
So, for me.. Borland got me curious, got me hooked, then I switched to tools that got me money. Maybe that's because the ecosystem had changed, but I can't be the only one.
You sound like me. 15 years old and Turbo Pascal was a mind blowing experience for me. Especially in the graphics arena. UCSD physics department hired me to convert their particle collision vector data (Monte Carlo calculations) into a 3D graphics representation. That was a hell of an experience for me.
> By the time the internet came along, and I got into full time development.. .net was out, with "academic" pricing for VS 2003.
The web was around for about a decade before Microsoft .NET arrived. In fact, I remember coding Delphi in an evaluation copy of the IDE given away on a cover CD from .net magazine (not to be confused with Microsoft .NET). And then, a few years later, downloading a pirated copy of a beta release of Visual Studio .NET.
It was actually the web that introduced me to Borland's Windows IDEs. I'd used Turbo Pascal extensively, but then got hooked on Visual Studio once I migrated away from DOS. Then I started seeing talk of these Borland development environments for Windows and thought "I love TP, so why not give these a shot?" I actually much preferred those IDEs to Visual Studio as well, but alas my career and personal interests were switching to non-Windows technologies at that point, and thus I never really found a practical use for Borland's Windows compilers.
A few years back I did need to throw together a basic Windows app for some clients, but by that point Delphi was dead and I'd forgotten a lot of Pascal's nuances anyway. So I ended up knocking up something in VB.NET, which was actually less painful than I remembered from the .NET 1.0 days; in fact, almost pleasurable. But for all of Pascal/Delphi's warts, I did very much prefer that language over any of the iterations of Visual Basic. In fact, I think I'd go further and say I preferred it over C/C++ as well.
Actually, I remember that Delphi was way more advanced on the TCP/IP side compared to Visual Basic. I remember downloading (for free) quite complete TCP/IP VCLs that included HTTP, FTP, ICMP, etc., whereas VB had nothing (at least for free).
VB and VB.NET are different languages. But yeah, Delphi ran circles around VB in nearly every measurable way. I don't think anyone would seriously argue that VB6 was a better language or had better tooling. And earlier versions of VB would have compared even worse.
As for what libraries VB had for networking, there was some HTTP OCX that was bundled with Internet Explorer (and I don't mean the Trident renderer), which was awful. But aside from that, there was only a basic wrapper around the Winsock C libraries. To be fair, the Winsock OCX was pretty decent for what it was, but you were left to write all the upper (host) layers of the OSI model yourself.
I am fully aware VB and VB.NET are two different languages :)
Actually, at the time I am talking about, .NET had not been invented yet (wikipedia says 2012), and it was invented by the guy cited in this thread as ex-chief dev of Delphi:
My post was just to say that Delphi had quite advanced TCP/IP component options compared to other languages of that time (I cited VB because it was supposed to be the "easiest" language of the period).
> Actually at the time I am talking about, .NET has not been invented yet (wikipedia says 2012)
I assume you mean 2002? FYI .NET was available to some developers before 2002, that was just the first non-beta release :)
> I am fully aware VB and VB.NET are two different languages...I cited VB because it was supposed to be the "easiest" language of the period
I mistook your post to reference Visual Basic because I mentioned it in my post where I discussed VB.NET. I say this because opening your post with "actually" suggested you meant your reply as a correction to my comment. So it wasn't clear to me that your post was intended purely as an interesting yet tangential anecdote.
Indeed. So much so that I found it was easier just writing my own HTTP classes on top of Winsock.
But to be fair, we are talking the 90s and HTTP wasn't as ingrained into technologies as it is today. Heck, back then parameterised SQL hadn't been invented; Internet Explorer was basically the only browser (Netscape had largely been crushed and Opera was non-free); most computers still shipped DOS (if just as a bootloader); and classic ASP was a popular server side framework. So it's easy to be critical with hindsight but the whole ecosystem was still maturing.
I have to say that Borland Turbo Pascal 3.0 (or earlier) was one of the finest pieces of software that I have ever seen: a full-fledged Pascal compiler and a WordStar-compatible editor in a 29K binary. Where have those days gone?
In the height of the enterprise transformation, I asked Del Yokam, one of many interim CEOs after Kahn, "Are you saying you want to trade a million loyal $100 customers for a hundred $1 million customers?" Yokam replied without hesitation "Absolutely."
They stopped selling $49.95 compilers with IDEs and tried to be an enterprise company. People still buy IDEs and compilers. If they had kept doing what they were doing and improving their products, they would have been fine. Instead, they wanted the big money and it didn't happen.
Not a single mention of all the dirty tricks by Microsoft?!
What a shallow memory. I remember how Microsoft cornered Borland and others into using undocumented features of their OS, then made those features incompatible. Remember, back in the 90s updating software at mass scale was a PITA, and end users were expected never to update.
They also abused their dominance to aggressively target key developers and contractors, and copied any good application in the ecosystem and bundled it.
But SV didn't learn the lesson, and we are now in even more abusive walled gardens for our mobile phones. And some young people parroting how wonderful Microsoft and Bill Gates are today.
> And some young people parroting how wonderful Microsoft and Bill Gates are today.
Compared to what Microsoft used to be like, Microsoft has recently made some pretty surprising, awesome moves.
Specifically, their open-source push (including .NET). You would never have seen that 10-20 years ago. If you had walked into Gates's office back then and said "I think we should open source this", I have a feeling he would have fired you on the spot.
They even support Linux on their Azure platform now, and it's not an it-will-run-but-you're-on-your-own arrangement.
Now, I'm not saying Microsoft is a saint or that I would want to work for them. But the fact that they didn't take action against the Mono or ReactOS projects makes them OK in my book (not that they would have any real legal case, but they could drag those projects through an expensive lawsuit that would just end with a deal to cease development).
I've said this before (and was downvoted naturally):
Microsoft's new Open Source strategy needs to be viewed with exactly the same amount of suspicion as IBM's and Oracle's.
It may just be a move to reduce their expenses, since they may get unpaid software contributions and testing (Oracle's CEO pretty much admitted this in an interview).
I view it as an attack on the LAMP stack. Mono with self-hosted HTTP through OWIN seems like a killer option. Let's face it, C# is a much better language for development than Java.
It's not as if Microsoft voluntarily got nicer. They did every dirty thing they could to dominate the PC industry and simply failed. Now they're beginning to steer their leaky ship in a better direction, good for them. For me personally, I'll never be a fan of the company that selfishly held the web back with IE for a decade because they didn't want to compete. Sic semper tyrannis!
What you don't understand is that Bill Gates is from the future. I don't wanna talk about time travel shit. Cause if we start talking about it, then we're gonna be here all day talking about it, making diagrams with straws. It doesn't matter. The point is, Gates grew up in a dystopian world and traveled back in time to divert money from the assholes that created it towards charitable enterprises to fix the parts of humanity that were breaking down. Only time will tell if this strategy will work, but we have all the time in the world (and then some), thanks to time travel.
I partially agree. Much of MS' recent moves are admirable, but they're clearly revivals of sidetracked projects and desperation moves to stay relevant as desktop OS dominance alone is beginning to lose its grip.
I wouldn't be all that snide about them. However, I can't say I find their platforms particularly palatable in general.
I was going to say the same. The move to open source .net is brilliant. I wonder if some enterprising company will try to convert .net into asm.js so we can run it in the browsers.
Some of Borland's problems had nothing to do with Microsoft. A good example is Borland taking 6 months to rewrite Quattro in object-oriented code. They did this in the middle of a heated battle with Microsoft Excel, where both companies were releasing major new features every 6 months. As awesome as OOP is, it is not a user-visible feature. Being MIA for 6 months while engineers OOP'd the code may have cost them the battle. You can read some of this in this story (search for 'object oriented'): http://www.nytimes.com/1993/08/29/business/the-executive-com...
Some of Borland's problems were indeed Microsoft's doing. OWL vs. MFC, for example. OWL was the first object-oriented library for Windows, but MFC eventually won because MFC was always first to support Windows features. OWL lagged behind because Borland did not have access to in-development versions of Windows.
Borland applications such as Quattro Pro also suffered because Borland did not have access to in-development features of Windows such as OLE 2.0, and because of Microsoft bundling productivity applications into an Office "suite".
While some of this is on Apple and Google, much of the root of this problem lies with the carriers themselves, not the software makers. Carriers have a long-standing history of building very high and tight walled gardens around their networks, the devices on those networks, and even the versions of software that run on those devices. It has actually gotten a ton better since the App and Play Stores came around. By building those walled-garden stores, Apple and Google have shifted a lot of the burden for developers from individual carriers to single platforms. I would much rather work with a single walled garden such as the App Store than have to come up with a version of my software for each carrier.
All tech companies do some shady stuff. Look at how Apple sought to bury Adobe because Jobs felt slighted; manufacturers having to install nets to stop factory workers from committing suicide; hundreds of billions in taxes avoided through Irish tax schemes; competitors kept from getting products to market through questionable patents and an army of lawyers; Uber making false calls to mess with Lyft's operations; Google approaching Facebook to fix wages; Google bullying suppliers, while suppliers took it in hopes of future business only to find Google built up their own works; and Google sitting on Apple's board while secretly developing their own competing products, which is questionable at best.
I would bet that the skeletons in the closets of these tech companies are far darker than what we find out about. I guess the short point is, usually the same people who point(ed) at the evils of Microsoft praise other companies who have done far worse.
No, not all tech companies do illegal things. It's a false equivalence.
* Chinese manufacturers aren't really "Silicon Valley tech companies" and workers there committing suicide has more to do with Chinese culture and poor work practices than technology.
* Keeping money in Ireland is not a crime. The government set up really dumb tax laws, and companies responded rationally to them.
* Patents suck, but again, it's a government issue. The government sets up patent laws, and you have to abide by them. Some companies are more abusive about this than others, but they all have to follow the law.
* Uber making false calls was unethical, and possibly a crime. Uber does suck as a company but not all tech companies are Uber.
* Google didn't approach Facebook to fix wages. Steve Jobs did that. It was illegal and the government fined everyone involved quite a bit of money (although probably not as much as they should have).
* "Google" doesn't sit on Apple's board. Some people from Google used to sit on Apple's board. If there is a conflict of interest they are supposed to resign, and that's exactly what happened when Google started competing with Apple's iPhone.
* Google doesn't bully suppliers; Apple does that (sometimes). Google doesn't manufacture Android phones, they get other companies to do that for them. Yes, even the Nexus phones.
* Microsoft was found to have engaged in anticompetitive practices in the antitrust case brought by the Department of Justice. If I remember correctly there were felony charges. If you don't understand how seriously unethical they were in the 80s and 90s, you haven't been paying attention at all.
It was over IE as well. OMG, they included a browser! Laughable nowadays, right? I used Netscape at the time and never had any issues installing or running it, even when IE hit 95% of the market.
Um, wow, ok. Look, you also have to consider the information that was revealed during the discovery phase of the lawsuit, not just what the DoJ decided to actually pursue.
Oh, and there has been more than one lawsuit to keep up with. A lot more than one lawsuit:
It's really hard to describe the full impact of what it was like to have everyone in the industry working under the constant fear of getting targeted by Microsoft for more than a decade, and the way that this shaped the market and technology as a whole. Summing it up as "LOL they bundled a browser" betrays a really massive ignorance.
Kids on HN don't understand what it was like when Microsoft was the only software company in town with a future. Kind of like Standard Oil back in the Rockefeller days.
Apple was dying, Google and Facebook didn't exist, Netscape had its "air supply" cut off and Borland, Delphi, and IBM looked like they were headed for the trash can of history. It's like trying to explain the Soviet Union to someone who was born in the 1990s. It just seems like a bunch of old men with bad haircuts, how scary could that possibly be?
I remember the conversations people had back then. You had basically 3 business models if you were going to run a software company. It was something like:
1. Stay so small/niche that Microsoft won't notice or care about you.
2. Avoid selling software as your primary source of income. This is basically a variant of #1.
3. Try to get bought by Microsoft. Ha, just kidding! Everybody knows that Microsoft doesn't buy stuff that's 'Not Innovated Here'.
3. Gamble. Hope you corner your market and extract as much value as possible from that market before Microsoft figures out what you're doing and enters your market. Or (later) C&D's you over some patents they have.
The weird thing is that outside the Bay Area (in some parts of Canada, at least), I still see startups recruiting for a #1-like business model, more-or-less: "The best $SOCIAL_MEDIA iPhone app in $CITY" or whatever. They're not actually afraid of Microsoft anymore, but it's like the mentality didn't go away.
Laughable. MS was terrified of competition so they always went after competitors. The only reason it was MS instead of IBM is IBM was scared of more government regulation on their business.
Apple could have easily had this market and more if they'd opened up.
Maybe you didn't work as a developer in the 80s. I used Borland, Watcom and many other vendors along with many other OSes as well. Magically in the mid 1990s you could even run something called Linux on your PC.
Elsewhere in this discussion, finally someone mentioned a very "interesting" trick: Microsoft threw an obscene amount of money at the chief architect of Delphi.
https://news.ycombinator.com/item?id=9713816
The Borland Turbo languages were the cat's pajamas.
Microsoft countered with the Quick languages.
Borland made Turbo Pascal for Windows and with Objects and then made Delphi.
Microsoft countered with Visual BASIC.
Borland made Borland C++ and JBuilder.
Microsoft countered with Visual C++ and Visual J++/J# and then later Visual C#.
The free IDEs and Free compiler languages ate into Borland's sales. Eclipse, Netbeans, IntelliJ, BlueJ, Sublime Text, GNU C/C++, Apple XCode, FreePascal/Lazarus, Ruby/Ruby on Rails, Python, Code::Blocks, etc.
In 2005, Microsoft introduced Visual Studio Express, a free version of their development tools.
Like Amiga, Borland had the superior technology, but cheaper/free alternatives undercut their sales.
Mostly it was the free and open source revolution that did Borland in.
> In the height of the enterprise transformation, I asked Del Yokam, one of many interim CEOs after Kahn, "Are you saying you want to trade a million loyal $100 customers for a hundred $1 million customers?" Yokam replied without hesitation "Absolutely."
Any day of the week! Why is this even in question?
I'm saying this from personal experience: I ran a B2B company for 7 years and switched to a B2C model three years ago. I would never go back.
Sure, having a couple of big customers looks like a more stable option at first, but when big ones hit the ground, they hit hard.
The levels of stress are beyond compare.
Of course, there are drawbacks and some things are different. If you aim for a large market, you have to invest much more in marketing/PR, but you are allowed to care less about the customer-support front.
(I hope I won't be eating my words in a couple of years, but from current POV, it seems much better to have a huge base of small customers than a few big ones).
Not really. Small customers are sensitive to fads and trends, while enterprise customers (particularly in the field of software, as we all know!) will stick with a product well past its sell-by date as long as it "works." You have a lot more time to pivot when you start losing enterprise customers than small customers.
The myopic nature of the beast is really dumb. You want a blend of both. It's no different than managing a portfolio of assets, probably because it is identical to managing a portfolio of assets.
Because a language, and to a lesser extent an IDE, needs an ecosystem, and that is based on the number of users. This is even more true today: do you think it would be easier to find a book on C# or Ada? A programmer who can code in it? To get your questions answered on Stack Overflow?
It depends on who you want to be and how you want to sell your product. At the absolute top where Google and Facebook is, there's no choice, there's just a billion customers.
Borland had its moments. Delphi (the RAD successor to Turbo Pascal) was a total hit, with Delphi 7 being the pinnacle; unfortunately, after that it all went downhill. Later on, in 2008, Embarcadero Technologies picked up Delphi and has been selling it ever since.
If anyone is interested, Delphi today is on life support and exists only because it still has a strong base of followers from the Borland time.
The rising price of the tools has pushed Delphi out of reach for younger generations, who are not used to paying for development tools anyway.
Embarcadero is still not getting it right. They know how to build great tech, but they are terrible marketers. Consider, for example, that right now they sell a range of cross-platform development tools, yet there's no way to view a set of impressive showcase/example apps on each platform to see what can be done. So there's no strong motivation to use their tools.
People evaluating new development tools want to see and be impressed by what can be done with it. That sort of thing just isn't a priority for Embarcadero.
I loved Turbo C. I learned C from it. The manuals were beautiful, lucid and generous. Borland C++ 3.1 was head & shoulders above Programmer's Workbench. Unfortunately it fell apart at 4.5.
Borland specialized in supplying tools that assisted in building Windows-only desktop GUIs which would in turn access a database sitting somewhere on a LAN. As soon as the market moved to web GUIs, and to hosting on Linux, they were dead.
Pro tip: add a ? to the URL to make the annoying Quora "login" dialog disappear. Even www.quora.com/Diary/? will make it disappear; no need to add the ?share=1 thing :D
IMO it was the Internet. C and Pascal dominated till 1994, then Java books were everywhere. I loved TP5 (the manual was awesome) but dropped everything to learn Java, which promised the world: write once, run everywhere, and Internet applications! Woohoo. Then Perl and JavaScript were on the scene. WIRED magazine! Very exciting times. Netscape Navigator Gold, baby. Delphi was "Microsloth Windoze" stuff, not even on my radar.
This is an interesting comment. I think you can actually move from the IDE space and gradually bring in features that are of interest to Enterprise customers, who then adopt your product.
I'm watching Qlikview do this right now: their original product was, and still is, a client app for Windows. It allows any business user to pull in datasets from pretty much any source as a poor man's ETL, and then lets that user do BI tasks in a very simple way. Where they are succeeding is that they haven't given up on this market, but are using it to drive interest from inside companies: eventually skunkworks divisions show its value to the business, which then buys the server software and licenses.
It's growing pretty rapidly, and they seem to have a sustainable model. But it's driven by the individual user.
Not having been a developer in the Borland era, the most valuable take-away for me is this statement:
"On paper this may seem like a fairly minor adjustment, if you have the attitude (as Borland executive management had) that developers are a dime a dozen and any developer can be applied to any product or problem space. That may work for technical programming skills but it doesn't work for passion."
Regardless of the product's technical abilities, it's a good reminder that when a product is made great by the hard work of people who believe in it, their passion should be weighed in product decisions. This is particularly relevant in the open source world.
Delphi was very popular in post-USSR countries, but very few people bought it; probably 90% used illegal copies. I guess most people didn't even realize it, because they bought it on a CD and paid $2 for it.
It was a long time ago, and I can't find anything to corroborate this on internet searches, but:
Didn't Borland attempt to charge for the use of a C runtime module? They attempted to profit from software developed using their C compiler. So not only would they make money selling the compiler, but anyone that used programs written with their compiler would have to pay as well.
Somewhere around that time they lost the whole C compiler market, I think.
Not sure if it's the same thing you're talking about, but waaay back they did a license change for their language products that said something along the lines of anything produced with their compilers belonged to them. There was a huge outcry and pretty much right away they said "oops! our lawyers got a bit happy" and fixed it. I don't recall exactly when that was. Their Pascal was "Borland Pascal" at the time. I think that was an honest mistake, as I doubt anyone there at the time would have thought they'd get away with that.
Borland was a beloved company that made tools that made programming enjoyable. The answer covers all the main points; I can only be sad that they don't exist today to keep making tools. Delphi, with all my goodwill, is simply not the tool I would use today, even if it weren't priced as it is.
Sun ignored the rise of commodity hardware. They were relying on the Internet and Wall Street bubbles of the late 90s. When those both popped, Sun took a big hit.
Anyway, why does answering this help with Borland?
Sun also had an overpriced compiler, to go with the overpriced Unix running on overpriced hardware. Then it saw these replaced with free, free, and cheaper products, respectively.
Sun were, in many ways, the worst of the commercial Unixes, they just had a very loud fan base insisting they were the One True Unix. Consider how long it took to get a journaling filesystem, never mind volume management!
> Anyway, why does answering this help with Borland?
It would be interesting to explore whether or not there are any commonalities in terms of what happened to Sun, Novell, Borland, etc. And if there are, it might be interesting, in turn, to ask if any of those lessons would be useful to contemporary software companies (especially startups, given the audience here at HN).
I suspect that there are some common factors that could be found, but I don't have a good feel for what they are off-hand, aside from falling back on cliches or tautologies.
The common factor would have to be that they were squeezed by both Microsoft and Linux. Sun sold "Enterprise" UNIX until Linux-on-Intel was good enough. Novell sold networking until Windows+AD was good enough (round about the launch of Vista). Borland sold IDEs until free IDEs were good enough, and also traditional client-server dev went out of fashion along with a language migration.
Actually, what made me stop using Borland (C++ Builder) was switching to Linux (moving to the open-source GCC and the wxWidgets library) and to the web (browsers and PHP getting good enough to build bigger projects).
Borland failed to deliver usable solutions for Linux and web at the time and after I got used to new tools I simply did not bother to try anymore.
Because it was a lousy strategy to try to beat Microsoft at its own game, creating developer tools? Also, adapting a huge codebase for each new Windows version cost too much.
It was for quite similar reasons that Live Picture failed.
We had a good product with fanatically devoted users; graphic artists persisted with LP on Mac OS 9 for years after Mac OS X came into common use.
The problem was that former Apple and Pepsi CEO John Sculley was a major investor. Live Picture's image editing product, also called Live Picture, was regarded as a tool by Wall Street, and as Sculley told us one day, "The Street does not value tools companies."
So he tried to turn us into some manner of internet company so we could have a big IPO. Really, the best he could come up with was that our (admittedly superior) competitor to Apple's QuickTime VR be used over the web for consumer product research.
He actually showed us a demo that depicted a virtual convenience-store shelf on which one could use the mouse to pick up a tube of toothpaste and then look it over.
LP originally retailed for $4K, but at the time it was $600. So he was going to drop a wildly popular six-hundred-dollar product so we could make a little coin by measuring websurfer response to animated toothpaste?
The $4K to $600 pricing drop was also a serious problem. While $4K was definitely too expensive, dropping the price so abruptly alienated our early adopters.
A while after I left LP, I found a Java memory-leak detection tool called, I think, Optimize-It. And yes, garbage-collected languages do suffer memory leaks if you don't know what you're doing, often seriously so, as when I had to configure a job to reboot a client's server because it kept running out of swap space.
Optimize-It was independently developed and published at first but Borland acquired it, then sold it for quite a lot of money.
While a tool like that is indeed valuable, it's a lot cheaper to just reboot your server every night at midnight.
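To make that stopgap concrete, here's a minimal sketch, assuming a Linux box with cron and root access; the schedule and command are purely illustrative, not the client's actual configuration:

    # Hypothetical root crontab entry: reboot every night at midnight so a
    # leaking JVM process never runs long enough to exhaust swap.
    # Install with `crontab -e` as root.
    0 0 * * * /sbin/shutdown -r now

The real fix, of course, is to hunt the leak down with a profiler like Optimize-It; a scheduled reboot just bounds the damage between releases.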
It should have sold for maybe $200 rather than the thousands of dollars that Borland charged.
I am quite sad as Borland, Live Picture, Microport, Seagate and the Santa Cruz Operation once offered really good tech employment to Santa Cruz County.
There are other companies there now so it's not like there is no work, but attempting to transform what really was a tools company so it would sell during the internet bubble threw well over fifty hard-working, incredibly dedicated and talented people out of work.
A coworker and good friend became homeless and quite desperate. I am pleased to report he did finally find a job and so was able to pay for a place to live but when I spoke to him while he was homeless he had lost all hope of survival. Someone like him should never have been homeless.
Decisions by companies listed on the stock exchange have to be seen from that perspective. The software business is an asset business; software licences are not apples. For some strange reason, some analysts believe that a certain kind of asset is more profitable: selling licences leads to higher turnover. That's how software is seen from the investor's perspective.
From the distribution perspective, software is craftsmanship plus "copy".
That's why open source delivers really good results. Open source is free because there is no barter involved. There is no real declining marginal utility. Every technology that was in a position to build a reputation goes bust at exactly the moment the last user has installed a copy ...
The marginal utility has to be introduced by design :). Then you get money ... because you have a price.
See Wikipedia for where Philippe's successes have been and what he is doing today: a smartwatch that runs two years per charge. His horological tech is being pursued by at least two Swiss firms in the Jura. "Philippe the Phoenix" is a facile adaptor I admire.