On 09/30/2009 06:52 PM, Prem Kurian Philip wrote:
From: steve <steve@xxxxxxxxxxxx>

If you *really* want to go that way and do a 'relevant' one-to-one
comparison, you have to include the number of syscalls being made by
gcc when compiling your code to binary ... which is essentially what
the perl interpreter is doing for you for 'free'.

>> the C program does the same job with 23 system calls

*plus* the number of syscalls it would take for gcc to convert text to
executable code, which is what perl does on your behalf. So, comparing
the number of syscalls of the final executable is not a valid measure
of the appropriateness of a language.
How do you think the python/perl/whatever-interpreted-language works? The
interpreter first reads in the script, parses the code, checks for errors,
compiles it down to byte-code, and so on, and only then starts executing
the statements one by one. How is this "free"?
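
You can actually watch that up-front work happen. Here is a quick sketch
(CPython only; perl's internals differ, but the parse-and-compile step is
analogous):

    import dis

    source = "total = sum(range(5))\nprint(total)"

    # the "hidden" parse + compile to byte-code happens here,
    # before a single statement has run
    code = compile(source, "<example>", "exec")

    dis.dis(code)   # dump the byte-code that step produced
    exec(code)      # only now do the statements actually execute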
A compiler is required to create the machine code just once, while an
interpreter has to redo this work every time a script is run - unless you
are using pre-compilation.
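
By pre-compilation I mean something along these lines (CPython again; the
file name is just a stand-in I made up):

    import py_compile

    # write a stand-in script so the example is self-contained
    with open("script.py", "w") as f:
        f.write("print('hello')\n")

    # pay the parse/compile cost once; the cached byte-code goes into
    # __pycache__/ and later imports skip the recompilation
    py_compile.compile("script.py")

CPython already does this automatically for imported modules; it is the
main script that gets recompiled on every single run.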
Ultimately, yes, it does come down to the number of instructions a
platform executes to perform a given function. No one in their right mind
will argue that a program written in C wouldn't be faster than the
equivalent program written in any other high-level language - unless the
algorithm was implemented poorly.
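
If anyone wants numbers rather than assertions, a crude harness like the
one below will do. './hello' and 'hello.pl' are placeholders for whatever
equivalent pair you are comparing, and the figure deliberately includes
start-up, which is exactly the overhead being argued about:

    import subprocess, time

    def wall_time(argv, runs=100):
        # average wall-clock time per run, process start-up included
        start = time.perf_counter()
        for _ in range(runs):
            subprocess.run(argv, stdout=subprocess.DEVNULL)
        return (time.perf_counter() - start) / runs

    print("C binary:", wall_time(["./hello"]))
    print("perl:    ", wall_time(["perl", "hello.pl"]))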
And yes, the platform you run your app on has a pretty drastic impact on
cost - the cost of software, servers, electricity, etc. all adds up if you
are going to be adding servers.