The tools at our disposal to punch in source code & then run it seem like a topic that would have been interesting in the 1990s but irrelevant in this day & age. But as someone who did start programming at the dawn of said decade, and who still sees new languages evolve to this day, the armchair exercise did seem worthwhile. Disclaimer: this post is quasi-autobiographical in nature; you have been warned.
Medieval times; the old school
The very early days had vanilla text editors, ranging from the classical vi of UNIX fame to EDIT.COM from the MS-DOS stables. Their understanding of the programming language was on par with that of a word processor, which is to say zero. The next evolution (at least as I experienced it) was tight, closed, full tool-chains such as the Borland family of IDEs for C, Pascal & C++, or even good old GW-BASIC. Let us look at each of the adjectives used to describe these tools.
First off, they were closed because the underlying compiler or interpreter almost always had its own version of language extensions that were core to its identity. AT&T SVR4 C bore little resemblance to Turbo C. Similarly, GW-BASIC was completely different from QBASIC. The IDE always seemed to be tied down to a specific language implementation.
Secondly, we call them tight because the closed nature of these systems gave vendors extreme liberty in addressing a variety of open issues. For example: contextual help for a function existed before languages had standards for expressing documentation. Java was probably the first mainstream language to have a docstrings specification. The closed nature of the systems, however, meant that this specific “tight integration” was restricted to the vendor’s standard libraries in the early days. The Visual Studio + MFC combination, for example, introduced its own conventions outside of even the core C++ standard to support something like this. Another example of tight integration would be interactive debugging.
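For the curious, Java’s docstrings specification is the Javadoc comment convention, which is what lets tooling surface contextual help without vendor-specific tricks. A minimal sketch (the `area` helper here is a hypothetical example, not from any real library):

```java
public class Rectangles {
    /**
     * Computes the area of a rectangle.
     *
     * @param width  the rectangle's width
     * @param height the rectangle's height
     * @return width multiplied by height
     */
    static double area(double width, double height) {
        return width * height;
    }

    public static void main(String[] args) {
        // Any Javadoc-aware IDE can show the comment above as
        // contextual help at this call site.
        System.out.println(area(2.0, 3.0)); // prints 6.0
    }
}
```

Because the convention lives in the language specification rather than in one vendor’s toolchain, every conforming IDE can read the same documentation from the same source file.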
Lastly, we call them full tool-chains because there was an assumption that the compiler/interpreter, the linker, the debugger and so on all came from the same vendor, with no scope for mix & match.
Medieval times; the contemporaries
The open source movement brought with it an interesting expectation: that a piece of source code be meaningful across multiple tool-chains and multiple target environments. This broke all three assumptions described so far; solutions were no longer closed, full-chain systems, and so providing super-tight integrations became hard. The proverbial C programmer on a *NIX system was coding for any number of compilers, for multiple hardware architectures, and possibly multiple operating systems. This immediately weakened the prospects of a meaningful IDE. Folks eventually managed to cobble together a weird & complex relationship between their gcc and their ctags and their gmake and their gdb, but that never had the coherence of, say, what a Visual Studio of that age could pull off.
The seeming conclusion was that “open” and “closed” systems had each taken their own distinct path: open systems traded tight tooling for portability, closed systems the reverse.
Early modern history
The Java camp had also gained some traction at that point and was showing a trend that few people understood well enough. It had (and probably still has) one of the most comprehensive specifications on matters beyond language syntax, which made it possible to have multiple, old-school, tightly integrated IDEs with full interoperability on the same project code base.
Present day (2013)
Let us look at the universe that we reached:
- C/C++ has lost market dominance; Pascal, Delphi and Fortran are all but gone
- Java & C# collectively dominate the compiled languages market
- And then there is a whole bunch of interpreted languages in the mix
It now appears that the compiled-languages folks are the ones who use an IDE, whereas the on-the-fly coders seem not to use one much (though they might yearn for fancy editors).
My read of the situation
All languages seem to have graduated to decent levels of comprehensive standards. The notion of common language-level libraries is well defined, constructs for expressing inline documentation are in place, file & module organization is flexible yet precisely defined, and so on. Furthermore, implementations from multiple vendors strive towards common conformance as opposed to taking pride in vendor-specific features.
Yet, there is one big difference, and it holds the key: the level of unambiguous, obvious expression of intent in the program. That difference is static v/s dynamic typing. Auto-completion, contextual help, safe refactoring and highlighting of infeasible operations (e.g. trying to invoke a non-existent member function on an object) always seemed like things a smart IDE could do for me that a fancy text editor never could. And static typing seems to be a prerequisite for providing these capabilities in their best form. Of course, building these tools still requires a deep investment in understanding the programming language, which is where fancy editors that fundamentally lack this ability hit a wall.
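To make the “infeasible operation” point concrete, here is a small sketch in Java (the class and the typo are hypothetical, purely for illustration): because the type of `s` is known at compile time, both the compiler and any IDE built on it can reject a call to a member that does not exist, before the program ever runs.

```java
public class Demo {
    public static void main(String[] args) {
        String s = "hello";

        // Known member on a statically known type: the IDE can
        // auto-complete it and the compiler accepts it.
        System.out.println(s.length()); // prints 5

        // A misspelled member would be flagged immediately at
        // compile time, not at run time:
        // s.lenght();   // error: cannot find symbol
    }
}
```

In a dynamically typed language the equivalent typo typically surfaces only when that line actually executes, which is exactly why the editor has so much less to work with.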
I’d conclude by saying that one should learn to live with fancy editors for dynamically typed languages, and enjoy the benefits of an IDE for a well-defined, statically typed one.