Favorite way to categorize programming languages?
-
How do you categorize languages? A recent article on a certain mailing list has debunked "compiled" vs "interpreted". I have long stood by my 3-mutually-exclusive-category system:
Type 1) "Hey, look what I can do in only 7 lines!" (Python, C#, most new languages, etc.)
Type 2) "Hey, look what I can do in only 7 characters!" (Perl, awk, golf-oriented gibberish)
Type 3) The good ones.
A lack of planning on your part does not constitute an emergency on mine.
-
Memtha wrote:
debunked "compiled" vs "interpreted"
I don't think that "debunked" is quite the best term, but yes, there's nothing about a language itself which means that it must or must not fit into only one of those buckets -- for the most part, any language could be in either or both -- it's just the difficulty of implementation. "Turing-complete" (vs not) and "general purpose" vs "domain specific" are decent attributes though. As well as how rich the set of supported datatypes is.
-
I prefer "strongly typed" and "cr@p". But that's just an opinion based on seeing the maintenance problems you can avoid with strongly typed languages.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt AntiTwitter: @DalekDave is now a follower!
-
I prefer "strongly typed" and "cr@p". But that's just an opinion based on seeing the maintenance problems you can avoid with strongly typed languages.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt AntiTwitter: @DalekDave is now a follower!
Do latently-typed or duck-typed languages count as "strongly typed"? This is always a point of confusion for me with strong vs weak distinctions - there is no line in the sand. Except for the outliers (rigorously-typed vs un-typed) you can make arguments for the majority of type systems being both strong and weak in different regards. I think Typescript is a great example of this since it has a gradual, structural type system.
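For what it's worth, here is a minimal TypeScript sketch of that gradual/structural mix (the `Point` interface and the values are made up for illustration): structural compatibility accepts anything with the right shape, while `any` opts a value out of checking entirely.

interface Point { x: number; y: number }

function describe(p: Point): string {
  return `${p.x.toFixed(1)}, ${p.y.toFixed(1)}`;
}

const duck = { x: 3, y: 4, label: "never declared as a Point" };
console.log(describe(duck));   // OK: structural ("duck") typing only checks the shape

const loose: any = "whoops";
console.log(describe(loose));  // also compiles, since `any` is the gradual escape hatch, but throws at run time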
-
and then there's APL
let max = list[0];
for (let i = 0; i < list.length; i++) if (list[i] > max) max = list[i];
or the APL version
⌈/
The less you need, the more you have.
Even a blind squirrel gets a nut...occasionally.
-
Do latently-typed or duck-typed languages count as "strongly typed"? This is always a point of confusion for me with strong vs weak distinctions - there is no line in the sand. Except for the outliers (rigorously-typed vs un-typed) you can make arguments for the majority of type systems being both strong and weak in different regards. I think Typescript is a great example of this since it has a gradual, structural type system.
In my opinion: No. Typing should be explicitly visible in the program text, and clearly identified as a type. Polymorphism, through subclasses, is OK. You can force run time type errors through casting, but casting is explicit. As pointed out: No language is absolutely bound to being interpreted or compiled, but strict typing leans quite strongly towards a complete parsing of the source code before execution starts. When you do that, why not go all the way and generate the code? So strong typing leans towards compilation rather than interpretation, although not by definition.
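A minimal sketch of that casting point, in TypeScript just for illustration (the `User` interface is made up): the cast is explicit and visible in the source, compiles cleanly, and the type error only shows up at run time.

interface User { name: string }

const raw: unknown = JSON.parse("42");    // at run time this is the number 42
const user = raw as User;                 // explicit cast: visible in the source, no compile-time complaint
console.log(user.name.toUpperCase());     // compiles, but throws a TypeError at run time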
-
My categories are:
1. Languages I like to work with
2. Languages I would like to investigate
3. Languages I have zero interest in
Regarding compiled vs. interpreted languages - I wrote my own language once. It started out interpreted and then I changed it to compile to native machine code. That was really fun and resulted in no changes to the language itself. In my opinion, languages are not inherently compiled or interpreted. Their implementations are what make the distinction.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
-
In my opinion: No. Typing should be explicitly visible in the program text, and clearly identified as a type. Polymorphism, through subclasses, is OK. You can force run time type errors through casting, but casting is explicit. As pointed out: No language is absolutely bound to being interpreted or compiled, but strict typing leans quite strongly towards a complete parsing of the source code before execution starts. When you do that, why not go all the way and generate the code? So strong typing leans towards compilation rather than interpretation, although not by definition.
That's really interesting that you consider readability as an attribute of strong typing. Not that it's wrong in any way; just that most attributes I've seen that people have come up with have been more function-oriented. Since your criteria have to do more with form, what's your opinion on inferred type systems like F# and Haskell?
-
Memtha wrote:
Type
Exactly. Is it strongly typed? Can it reflect on itself? Does it support functional programming expressions? Does it support anonymous types and functions? Is it compiled (the classic definition of compiled, IL included)? If yes, I like it. If somewhat yes, I tolerate it. If no to all, I won't use it. That means:
C#: yes
TypeScript: tolerated yes
JavaScript, Ruby, Python, etc.: I won't use it.
Now, granted, I do like Python for certain things. There are always exceptions to the rule.
Latest Articles:
ASP.NET Core Web API: Plugin Controllers and Services
-
That's really interesting that you consider readability as an attribute of strong typing. Not that it's wrong in any way; just that most attributes I've seen that people have come up with have been more function-oriented. Since your criteria have to do more with form, what's your opinion on inferred type systems like F# and Haskell?
I have not spent much time with pure functional languages at all - not enough to have any qualified opinion. For one project, 30 years ago, we evaluated Erlang, but rejected it for our use. My main experience with Lisp is from emacs :-) F# and Haskell I know at "Wikipedia level", not from any practical use.
My professional upbringing is from the days when software designers still did data modelling. I know it is not comme il faut to state, in 2022, that Entity Relationship modelling has something to be said in favor of it, but I have seen ER being used very successfully in a number of projects to really get a grasp on the problem domain. And it is an excellent tool for communicating with a customer: They will easily understand the concepts so that they can participate in development of the ER model, and when that is in place, they can successfully teach you the operations to be done on the data. (And sometimes they also realize that the current state of data collections and handling procedures is a mess ...)
So I have always been on the data model side, rather than the function-oriented one. I guess that is an essential reason why I consider strong typing essential.
-
Quote:
Typing should be explicitly visible in the program text, and clearly identified as a type.
I agree that it should be, but I do not believe this is a requirement for a language to be strongly typed. Specifically, C++ `auto` and C# `var` break this rule, yet both are still strongly typed, because misusing an object is still likely to result in a compiler error.
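The same point can be seen with plain type inference in TypeScript (used here only as an analogue of `auto`/`var`): nothing is written out, yet the compiler still knows and enforces the type.

const scores = [3, 1, 4, 1, 5];   // inferred as number[], no annotation written
let best = scores[0];             // inferred as number

// best = "high";                 // would not compile: Type 'string' is not assignable to type 'number'
// best.toUpperCase();            // would not compile: 'toUpperCase' does not exist on type 'number'
console.log(best);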
-
and then there's APL
let max = list[0];
for (let i = 0; i < list.length; i++) if (list[i] > max) max = list[i];
or the APL version
⌈/
I had never heard of APL, but after looking at this and a handful of examples online I firmly place it in my type 2 category, because it lacks any form of readability to anyone not very well versed in it, whereas most programmers who have never touched C++ can probably figure out what the C++ does. IMHO, those few seconds you save typing will be wasted many times over debugging the swamp of symbols.
-
Agreed with strong-typed and compiled. I personally cannot stand reflection, because it tends to lose the benefits of strong typing: string mangling at runtime can't really be compiler-checked. Example: I am presently building (not by choice) a scheduling system that pulls the assembly name, type name and method name from SQL, loads the assembly, finds the type and method, and runs it. If some bonehead comes along and changes the method signature, no compiler errors will occur, but the task will fail at runtime (a sketch of that failure mode follows below). Anonymous types and functions (and especially lambdas and "properties") are perfect examples of the kind of fluff I despise in C#. It does not take that much longer to make proper named classes and functions, and the result is much more readable and reusable. The extra time will pay for itself later when I don't have to go to the docs to find the implied return type of my function based on what it is being passed to.
As trønderen wrote above:
Typing should be explicitly visible in the program text, and clearly identified as a type.
"
-
Functional vs. imperative and general purpose vs. domain specific are my main delineations. Beyond that, "interpreted" vs "static compiled" vs "JIT compiled" is technically an implementation detail. Consider JavaScript - initially interpreted, now JIT compiled. That's why I don't consider it when I deal in languages. For me it's about what I can do with it, not even "how" I do it. Although I do consider the form of typing (duck typing, static typing, etc.) when evaluating a language.
Real programmers use butterflies
-
Does it solve my problem? Yes/No. C# usually does, JavaScript occasionally does, SQL sometimes does, Regex rarely does... As a PLC programmer you're probably using C or Rust, but as a web developer those aren't really viable choices; C# and JavaScript are. Golf-oriented gibberish never solves my problem; that's more for fun and giggles. That's not to say I like all those languages equally, but I'll learn/use them when I have to.
Best, Sander Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
-
Well, APL is quite readable to mathematicians. (Or at least to a certain share of them.) APL was not developed as a programming language at all, but as a notation for teaching the math of matrices. Kenneth Iverson used it as a "blackboard language" for a number of years when lecturing at Harvard in the late 1950s. It wasn't until he quit teaching and moved to IBM that some of his workmates said something like "Hey, we could make a computer do these operations!"
But I must admit that you are right: APL is definitely not a language that you will learn in two weeks. And you must keep on using it; otherwise you will forget so much that you will have to dig up your old APL textbook to find any sense in the APL programs you wrote yourself a few years ago.
My first programming language was Basic - a very basic one, with at most 286 variables named A-Z or A0-A9 through Z0-Z9, and 26 string variables A$-Z$. My next language was APL, and I was extremely impressed by its power compared to Basic. (I was 17 at the time.) I still have my APL textbook, and I'd love to pick it up again - just for doing programming that is very different from at least 95% of all programming done today. (I just need a tuit.)
-
My categories are: Do I know it? Do I need to know it? Do I want to know it? Am I sufficiently not lazy to learn it?