Compiled vs Interpreted - or the Best of Both?

Compiled code is translated ahead of time from the developer's source code (the symbols and words we understand) into machine code, which is fast to execute.  Interpreted code is processed by an intermediate layer and converted to machine instructions "on the fly", while the program runs.

APL has always been interpreted.  That provides enormous flexibility, and gives METSIM the ability to do adaptive programming: programs can even write programs to suit the requirements in real time.
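
For example, here is a minimal sketch using the standard execute primitive ⍎, which runs a character vector as APL code (assuming the default index origin):

      expr←'2×⍳10'        ⍝ a line of code built as text at run time
      ⍎expr               ⍝ execute it
2 4 6 8 10 12 14 16 18 20

The same idea scales up: ⎕fx fixes a whole function from its character representation, so a program really can write and install new programs while it runs.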

Mainstream programming has returned to interpreted coding.  ".NET" is the new Microsoft standard, in which the program's code is first compiled to an intermediate language and only then converted to machine code.  The intermediate step doubles as a security layer: the intermediate processor is the policeman.

APL was one of the first three programming languages available in the .NET developer framework, because it was "already there" in the interpreted realm and it was easy to connect to the Microsoft intermediate layer.

But some theoretical programming observers may roll their eyes and sneer under their breath at the speed loss of an interpreted language.  This is common from people who "know about but do not do".  The cost of a calculation used to be measured in "machine time" on a time-shared (pay by the processor time used) mainframe.  APL still has system functions for analysing the processing time of each function, and if you have too much time on your hands you can use them to save a millisecond or two.  Is it faster to calculate 2+2, 2×2, or 2*2 (multiply and power in APL)?  Well, if you find yourself stuck in this conversation it's time to get a taxi and go home.
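
For the curious, timing is a one-liner.  A minimal sketch, assuming ⎕io←1 and the usual APL+Win layout of the ⎕ai accounting vector (element 2 is accumulated compute time in milliseconds):

      t←⎕ai[2]            ⍝ compute time used so far, in ms
      r←+/÷⍳100000        ⍝ the expression under test
      (⎕ai[2])-t          ⍝ milliseconds it cost (machine-dependent)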

But maybe you have an algorithm that is very important to you, and it is too slow for your requirements (perhaps because of its recursive nature, which is usually unnecessary in APL anyway).  Are you stuck with that compromise because APL is interpreted?
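
As an aside on the recursion point, compare a toy recursive sum with its array-oriented equivalent (a sketch; rsum is an invented name):

      ∇ r←rsum v
        ⍝ recursive sum: one function call per element
        r←0
        →(0=⍴v)/0
        r←(↑v)+rsum 1↓v
      ∇

      +/v                 ⍝ the array equivalent: one interpreted primitive

The interpreter's overhead is paid once per line, so the single primitive +/ beats the thousands of calls made by rsum, no compiler required.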

The answer is no: APL+Win provides the system function ⎕wcall to call a compiled machine-code version of that algorithm, written by someone who knows how to implement it in the most efficient machine code.  Speed increases of a few thousand percent can be achieved, though I think that would be a nightmare in some cases.  Imagine trying to watch the recycle stream convergence report if your model converged thousands of times faster.
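
A hedged sketch only: GetTickCount is a standard Windows API routine that ⎕wcall can reach, while mylib.dll and its Converge function are made-up stand-ins for your own compiled algorithm:

      t←⎕wcall 'GetTickCount'      ⍝ a direct call into compiled code
      ⍝ a routine exported from your own DLL (hypothetical names) is
      ⍝ declared to APL+Win in the same way, then called by name:
      ⍝     result←⎕wcall 'Converge' data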

For steady-state modelling this is unlikely ever to be a requirement, but for dynamic, adaptive, optimising systems some machine code may be useful.