VB haters, look away
-
I really don't like those feature-vs-feature, mechanism-vs-mechanism, xyzzy-vs-xyzzy style comparisons. Looking at each feature / mechanism / xyzzy in isolation tends to hide its intended or established use, and it reveals nothing about the "ecosystem" around the language. It allows a Fortran programmer to write Fortran in any language, arguing that he or she is just using the mechanisms provided by the language in a perfectly correct way. Reducing the differences between two languages to mere syntax details can actually be very misleading.
-
Arrays start at 1 - just like when counting your fingers. C# (C, C++...) messed up, who counts anything from zero? It's unnatural, zero simply does not exist.
Sin tack the any key okay
-
To me, as a mathematically inclined person, it really hurts taking the elevator in our new office building down to the basement. It goes: 4, 3, 2, 1, -1 ...!!! HEY! You dropped something! There is supposed to be something in between there! I am equally upset about Christian churches - I don't know if it applies to all, but at least the Protestants in Europe and the Catholics officially number years "..., -2 (i.e. 2 BC), -1, +1, +2, ...". There are years before Christ and years after Christ, but no year "of Christ", i.e. the year of his birth. This hurts my mathematical feelings.
-
It looks like it was more of a legal thing with Sun. From 2002: [Sun, Microsoft settle Java suit - CNET](https://www.cnet.com/uk/news/sun-microsoft-settle-java-suit/)
Quote:
A Microsoft representative said the dispute lingered for too long. "We don't think anyone wins, but considering the lawsuit has been ongoing for three years, this is a good conclusion to this controversy," said Microsoft spokesman Jim Cullinan. With the deal struck, Cullinan said Microsoft will be allowed to continue to offer its existing Java products, including its popular J++ development tool, for the next seven years. Microsoft product manager Tony Goodhew said the company will include J++ as a separate CD with the next version of Visual Studio.
Now is it bad enough that you let somebody else kick your butts without you trying to do it to each other? Now if we're all talking about the same man, and I think we are... it appears he's got a rather growing collection of our bikes.
We don't think anyone wins
Wrong - the attorneys (always) win.
-
Funny, C# was modeled mostly after Java and C++, but no one ever mentions the Java part.
-
I'm reading a C# book that was recommended on here recently and found this gem in the beginning.
Quote:
The truth of the matter is that many of C#’s syntactic constructs are modeled after various aspects of Visual Basic (VB) and C++. Troelsen, Andrew; Japikse, Philip. C# 6.0 and the .NET 4.6 Framework (Kindle Locations 3123-3124). Apress. Kindle Edition.
:-\
There are two kinds of people in the world: those who can extrapolate from incomplete data. There are only 10 types of people in the world, those who understand binary and those who don't.
RyanDev wrote:
The truth of the matter is that many of C#’s syntactic constructs are modeled after various aspects of Visual Basic (VB) and C++.
That's a good thing. It helps a developer make the transition between the two paradigms. The fact that C# and VB.Net are so similar in many ways really makes life easier.
If you think hiring a professional is expensive, wait until you hire an amateur! - Red Adair
-
I haven't been working with compilers for a number of years, so maybe there are younger species out there that do things in a different way - I know the "classical" way of doing it, believing that today's compilers are roughly the same: First, you break the source text into tokens. Then you try to identify structures in the sequence of tokens so that you can form a tree of hierarchical groups representing e.g. functions at some intermediate level, statements at a lower level, and terms of a mathematical expression even further down. The term DAG - Directed Acyclic Graph - is commonly used for the parse tree. Nodes in the DAG commonly consist of 3-tuples or 4-tuples in a more or less common format for all nodes: some semantic / operation code, two or three operands, or whatever else the compiler writer finds necessary.
Many kinds of optimization are done by restructuring the DAG: recognizing identical sub-trees (e.g. common subexpressions) that need to be evaluated only once, identifying statements within a loop that will have identical effect in every iteration so that the sub-tree can be moved out of the loop, etc. Unreachable code is pruned off the DAG. All such operations are done on an abstract level - a variable X is treated as X without regard to its location in memory, number of bits (unless the language makes special requirements), and so on. The DAG is completely independent of the word length, byte ordering, 1s- or 2s-complement arithmetic, register IDs or field structure of the instruction code of any specific machine architecture. You may think of variables and locations as still being in a sort of "symbolic" form (lots of symbolic labels were never visible in the source code, so this certainly is "sort of"). Once you have done all the restructuring of the DAG that you care for, you traverse the tree's leaf nodes to generate the actual machine instructions. (This part of the compiler is commonly called the "back end".)
Now you assign memory addresses and use of registers, and choose the fastest sequence of machine instructions for that specific machine. You can still do some optimization, e.g. keeping values in registers (now that you know which registers you've got), but it is essentially very local. The DAG indicates which sub-trees are semantically independent of each other, so that you may reorder them, run them in parallel, or e.g. assemble six independent multiplication operations into one vector multiply if your CPU allows. All internal symbolic references can be peeled off; the only symbols retained are external ones.
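The tree-to-DAG sharing described above can be sketched in a few lines. This is a toy illustration (in Python for brevity), not real compiler code; the `Node` class, the `mk` constructor and the hash-consing pool are all invented for the example:

```python
# Toy illustration of how a compiler front end can turn an expression
# tree into a DAG by "hash-consing": structurally identical subtrees
# are created only once, so common subexpressions share a single node.

class Node:
    def __init__(self, op, *kids):
        self.op, self.kids = op, kids
    def __repr__(self):
        return self.op if not self.kids else f"({self.op} {' '.join(map(repr, self.kids))})"

_pool = {}

def mk(op, *kids):
    """Return the unique node for (op, kids), reusing an existing one if present."""
    key = (op, tuple(id(k) for k in kids))
    if key not in _pool:
        _pool[key] = Node(op, *kids)
    return _pool[key]

# Build (a*b) + (a*b): the two a*b subtrees collapse into one shared node.
a, b = mk("a"), mk("b")
left = mk("*", a, b)
right = mk("*", a, b)
total = mk("+", left, right)

print(left is right)   # True: one shared node, i.e. a DAG, not a tree
```

Once the `a*b` node is shared, an optimizer only has to emit code for it once - that is the common-subexpression elimination mentioned above.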
All correct, but are you disagreeing with 'C# is compiled to a type of bytecode'?
-
I would tell you what the zeroth finger is, but I don't think you are ready, yet.
«Differences between Big-Endians, who broke eggs at the larger end, and Little-Endians gave rise to six rebellions: one Emperor lost his life, another his crown. The Lilliputian religion says an egg should be broken on the convenient end, which is now interpreted by the Lilliputians as the smaller end. Big-Endians gained favor in Blefuscu.» J. Swift, 'Gulliver's Travels,' 1726CE
4!
-
PIEBALDconsult wrote:
0 to n inclusive
Ok, but what's the reasoning behind that? You dimension an array to n elements and get an array with n + 1 elements. This really is interesting. Back in the day I did not use BASIC very much. The interpreters were too slow, especially for graphics. When finally a C compiler fell into my hands (on the Atari ST), I never looked back. The whole thing sounds like a misunderstanding that came when everyone and their dogs started to write BASIC programs on their TRS-80s or later on their C64s.
CodeWraith wrote:
You dimension an array to n elements
No, you dimension it for n+1, as per the spec.
-
Arrays start at 1 - just like when counting your fingers. C# (C, C++...) messed up, who counts anything from zero? It's unnatural, zero simply does not exist.
Sin tack the any key okay
Lopatir wrote:
who counts anything from zero?
Everybody, but many don't realize it. Zero is a perfectly good value for counting. For instance, there are zero elephants in this room. Whenever you count something you always start with zero, then you count the first item as one. It's just so intuitive, you don't really think about it.
-
CodeWraith wrote:
You dimension an array to n elements
No, you dimension it for n+1, as per the spec.
DIM X(5)
n=5, so our array now should have six elements, indexed 0 - 5. Strange way of saying that you want six eggs, but OK. At least we use the same value to dimension the array and as the highest valid index. In the end it is just a question of specifications and conventions. However, the original problem was at the beginning of the array. Of course we could access the array with 1 - 6, but that would be even more confusing. So, where do you think the habit came from to dimension the arrays one element too large, accessing them with 1 to n and wasting element 0?
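The DIM convention being discussed can be modeled directly. A minimal sketch (in Python, since classic BASIC is hard to run in-line; `dim` is an invented helper mimicking the spec's behavior):

```python
# Sketch of classic BASIC's DIM semantics: DIM X(5) declares valid
# indices 0 through 5 inclusive, i.e. six elements. Modeled here with
# a plain Python list for illustration.

def dim(n):
    """Return storage for a BASIC-style array DIM X(n): indices 0..n."""
    return [0] * (n + 1)

x = dim(5)
print(len(x))                 # 6 elements for DIM X(5)
print(list(range(len(x))))    # valid indices: [0, 1, 2, 3, 4, 5]
```

So the argument to DIM names the highest valid index, not the element count - which is exactly the ambiguity the thread is arguing about.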
I need a perfect, to the point answer as I am not aware of this. Please don't reply explaining what method overloading is
-
Member 7989122 wrote:
It goes: 4, 3, 2, 1, -1
It says "-1"? That would confuse a lot of people.
It does. This is not a new independent building, but a new wing. The elevators in the old wings go 4, 3, 2, 1, U - the U is for "underetasje", Norwegian for "sub-floor". (For buildings having two basement levels, it is common to label them U1 and U2.) I guess the reason why they changed it is that we have a large fraction of foreign employees who don't speak Norwegian, so the management (or elevator constructor?) wanted something language independent. You could say that "U" indicates "underground", but even an English-based abbreviation is sort of language dependent :-). Sure, almost everybody around has at least some understanding of English, but sometimes very little and limited to professional job terms; in the elevator their mind is never tuned in to English. A U is about as good as a Chinese ideograph - just some blurb that makes little sense except symbolizing the basement level.
-
To be fair, if you type "bytecode" into Google, most of the links returned refer to Java rather than the more generic usage. The former would suggest a definition of "type of", where the latter would not require the comparison.
If Google had existed in the early 1980s, a search for "bytecode" would have returned thousands of references to Pascal and its P4 bytecode format. The Pascal compiler was distributed as open source, with a backend for a virtual machine (also available as open source for a couple of architectures). You could either adapt the VM to the architecture of your machine and keep the compiler unchanged, or you could replace the P4 code generating parts of the compiler with binary code generation for your own machine.
Actually, lots of interpreters for non-compiled languages of today do some compilation into some sort of bytecode, which is cached internally so that e.g. a loop body needs to be symbolically analyzed only on the first iteration. But Java is the only language (after Pascal and its P4) to really focus on this, making "Java Virtual Machine" a marketing concept, and really pushing "write once, run anywhere" as The Selling Point of the language (more so 20 years ago than today). So you are right: Java is very prominent in bytecode references.
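The "compile to bytecode, then interpret" pattern described above fits in a few lines. A minimal sketch (in Python; the stack-machine instruction set here is invented for the example, not P4 or JVM bytecode):

```python
# Minimal sketch of bytecode interpretation: source is translated once
# into a compact instruction list, and the loop below only ever
# dispatches on those instructions - no re-parsing on later iterations.

def run(bytecode):
    """Interpret a list of (op, arg) pairs on a small stack machine."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# "Compiled" form of the expression 2 + 3 * 4:
prog = [("PUSH", 2), ("PUSH", 3), ("PUSH", 4), ("MUL", None), ("ADD", None)]
print(run(prog))   # 14
```

The point of caching `prog` is exactly the one made above: the expensive symbolic analysis of the source happens once, and only the cheap dispatch loop runs repeatedly.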
-
All correct, but are you disagreeing with 'C# is compiled to a type of bytecode'?
My immediate reaction: yes, I would disagree. Bytecodes are ready for execution, while the dotNET output from a C# compiler is not. You could say that I am using a "narrow" definition of the term, but the term could have a more general meaning. Yes, it could - it could mean any code representation that is built up of bytes. Like source code :-). We could even generalize the "byte" concept: the old Univac mainframes could work with 9-bit bytes (4 to the word) or 6-bit bytes (6 to the word), while the DEC-10 and DEC-20 had 7-bit bytes (5 to the word and one spare bit). But that is not the common "compiler guy" interpretation of "bytecode". The linearized DAG is not directly executable, like a bytecode is. Obviously, you could, at run time, do a just-in-time compilation into a bytecode for an interpreter, rather than compiling into native binary code. But at least as far as I know, there are no virtual machines directly interpreting dotNET assemblies with no pre-execution processing step.
In my student days, we were a group of students making an attempt to build a direct interpreter for the intermediate language from another front end compiler (for the CHILL programming language), having a similar architecture. We soon realized that the data structures required to maintain the current execution state would be immensely large and complex; the task of building the interpreter would far exceed making a complete backend compiler. You couldn't do without a unified symbol table. You couldn't do without a label-to-location mapping. You couldn't do without a lot of state information for various objects. You couldn't do without ... So we never completed the project. (It was a hobby project, not a course assignment.)
-
Quote:
Bytecodes are ready for execution, while the dotNET output from a C# compiler is not.
Nah, op-codes are ready for execution, bytecodes are not. I like this definition: Bytecode is a form of hardware-independent machine language that is executed by an interpreter. It can also be compiled into machine code for the target platform for better performance.
-
Nah, op-codes are ready for execution, bytecodes are not. I like this definition: Bytecode is a form of hardware-independent machine language that is executed by an interpreter. It can also be compiled into machine code for the target platform for better performance.
If you are right, then terminology is changing. In my book, the op-code is the field in the binary instruction code that indicates what is to be done: add, shift, jump, ... Usually, the rest of the binary instruction code is operand specifications, such as memory addresses or constants. In more high-level contexts I have seen "op-code" used for a field in a structure, e.g. a protocol. Again, the op-code tells what is to be done (at the level of "withdraw from bank account", "turn on" etc.); the other fields tell with what it is to be done. You suggest a new interpretation, that an op-code is both the 'what to do' and the 'with what to do it'. Maybe that is an upcoming understanding, but it is certainly not the traditional one.
JVM bytecodes are certainly ready for execution, once you find a machine for them. It is easier to build a virtual machine, a simulator, than to build a silicon one. So that is what we do. You can build a translator from MC68000 instructions to 386 instructions. Or from IBM 360 instructions to AMD64 instructions. Or from JVM instructions to VAX instructions. Suggesting that the intention of compiling to MC68K instructions was to serve as an intermediate step to 386 code would be crazy - that was never the intention of the MC68K instruction set. Similarly, the intention of Java bytecodes was not to be translated into another instruction set.
If you first compile to one instruction set (including bytecode, such as Java or Pascal P4 bytecode), and then translate to another instruction set, there is generally a loss of information, so that the final code is of poorer quality than if it had been compiled directly from the DAG, which usually contains a lot of info that is lost (i.e. used and then discarded) in the backend. Some of it may be recovered by extensive code analysis, but expect to lose a significant part, in the sense that you will not utilize the target CPU fully. Especially if the first/bytecode architecture has a different register philosophy (general? special?), interrupt system or I/O mechanisms. So, if at all possible, generate the target code from the intermediate level, not from some fully compiled instruction set.
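The traditional op-code-as-a-field view described above can be shown concretely. A sketch in Python; the 8-bit-opcode / 24-bit-operand layout is invented for illustration and does not correspond to any real architecture:

```python
# The op-code is a bit field inside the binary instruction word that
# says *what* to do; the remaining bits specify the operands (*with
# what* to do it). Hypothetical 32-bit layout: 8-bit opcode on top,
# 24-bit operand below.

def decode(word):
    """Split a 32-bit instruction word into (opcode, operand)."""
    opcode = (word >> 24) & 0xFF     # top 8 bits: what to do
    operand = word & 0x00FFFFFF      # low 24 bits: with what
    return opcode, operand

ADD = 0x01
instr = (ADD << 24) | 0x000123       # "ADD" with operand 0x123
print(decode(instr))                 # (1, 291)
```

On this view, the op-code is just the `opcode` field, never the whole word - which is the distinction being argued in the post above.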
-
One of the "beauties" of the stock C# switch statement was that using integer case qualifiers it compiled to a mean and lean jump-table in CIL. I assume it still does; have yet to see any performance comparisons of use of the new features and other techniques for switch-a-roo. I like the new features.
«Differences between Big-Endians, who broke eggs at the larger end, and Little-Endians gave rise to six rebellions: one Emperor lost his life, another his crown. The Lilliputian religion says an egg should be broken on the convenient end, which is now interpreted by the Lilliputians as the smaller end. Big-Endians gained favor in Blefuscu.» J. Swift, 'Gulliver's Travels,' 1726CE
Note that this is not at all set by the language definition, but 100% decided by the compiler writer. I have seen compilers (not C#) that generate completely different code depending on the optimization switches: if you optimize for speed, and the case alternatives are sparse, you may end up with a huge jump table. If you optimize for code size and the number of specific cases is small, it might compile like an if ... elseif ... elseif ... When you switch on strings, hash methods are sometimes used to reduce the number of string comparisons that must be done. Compilers may try out several different methods of compiling a switch statement, and assign scores to each alternative based on the compiler options, such as optimization level and target architecture. The one with the highest score wins, and is passed on to later compiler stages. This sure slows down compilation, but it is generally worth it.
A small sidetrack, but closely related: contrary to common belief, modern standards for sound and image processing, such as MP3 and JPEG, do not specify how the compression is to be done, only how a compressed file is decompressed. A good compressor may try out a handful of different ways to compress the original (sometimes varying wildly in compressed encoding), decompress according to the standard and do a diff with the source material. The alternative with the smallest diff result is selected. (Or the size of the diff result gives that alternative fewer or more points on the scoreboard, together with e.g. compressed size.) The compress-decompress-diff-evaluate cycle sure takes CPU power, but today we have plenty of that, at least for sound and still images.
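The jump-table-versus-comparison-chain trade-off above can be sketched at a high level. A toy model in Python (the case values and handlers are invented; a real compiler does this with machine instructions, not function lists):

```python
# For dense case values a compiler can emit a jump table: one indexed
# load, constant time regardless of which case hits. For sparse values
# a chain of comparisons may be smaller, at the cost of comparing one
# value after another.

def h0(): return "zero"
def h1(): return "one"
def h2(): return "two"

jump_table = [h0, h1, h2]           # dense cases 0..2: index directly

def dispatch_dense(n):
    return jump_table[n]()           # one indexed load, no comparisons

def dispatch_sparse(n):              # sparse cases: compare one by one
    if n == 7:
        return "seven"
    elif n == 1000:
        return "thousand"
    return "default"

print(dispatch_dense(2))             # "two"
print(dispatch_sparse(1000))         # "thousand"
```

A jump table for the sparse cases 7 and 1000 would need 1001 slots, mostly pointing at the default - which is why a size-optimizing compiler picks the comparison chain instead.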
-
Brent Jenkins wrote:
It looks like it was more of a legal thing with Sun
One of the points of Java was that you were supposed to be able to run it on any VM on any supported OS. Microsoft made specific changes to their Java implementation that made it impossible to run on different VMs or even compile. CNN - Sun alleges new Java violations by Microsoft - July 13, 1998[^]
Hey, but we've got .NET Core now :-D :thumbsup:
Now is it bad enough that you let somebody else kick your butts without you trying to do it to each other? Now if we're all talking about the same man, and I think we are... it appears he's got a rather growing collection of our bikes.
-
Doesn't C#'s ``foreach`` come from VB's ``For Each``? AFAIK C++ doesn't have any equivalent.
We still miss some loop constructs that are offered by other languages. The one I miss the most is for ListElementPointer in ListHead:NextField ... to traverse a singly linked list, linked through NextField. Then I miss the value set: for CodeValue in 1, 3, 10..20, 32 do ... (a total of 14 iterations). And then, another favorite: for ever do ... In C, I sometimes "simulate" this by #define ever (;;).
Then I miss the alternative loop exits, where you specify a different loop tail depending on whether the value set was exhausted or the loop was terminated prematurely because some condition was fulfilled: for ... do ... code ... while ... maybe more code ... exitwhile ...code handling premature loop termination... exitfor ...code handling value set exhausted termination... An important aspect of the exitfor/exitwhile is that the code is inside the scope of the for statement, so that e.g. variables declared within the for are available to the tail processing. If you simulate this by setting some termination flag, breaking, and then testing after the loop: if TerminationCause=exhaustion do ... else ..., then what went on in the loop is essentially lost, so this is certainly not a good replacement (and it takes a lot more typing).
Finally, for all sorts of nested constructs: I strongly favor a label identifying a block, rather than a point. I want to be able to do an exit InnerLoop; or exit OuterLoop; or even exit MyProcedure; without the need for setting all sorts of flags that must be tested after InnerLoop and after OuterLoop, and from there take a new exit to an outer level. You could say that this is little more than syntactic sugar. Sure, but syntactic sugar makes programming sweet.
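Two of the wished-for constructs above actually exist in Python, which makes a handy sketch: `itertools.chain` gives the value-set loop, and the `for`/`else` clause gives the two loop tails (`else` runs only when the value set was exhausted; `break` takes the premature-termination path). The `find_code` function is invented for the example:

```python
# "for CodeValue in 1, 3, 10..20, 32 do ..." with exitfor/exitwhile
# tails, modeled with chain() and Python's for/else.

from itertools import chain

def find_code(stop_at):
    # value set: 1, 3, 10..20, 32  (14 iterations total)
    for code in chain([1, 3], range(10, 21), [32]):
        if code == stop_at:
            result = f"premature exit at {code}"   # the "exitwhile" tail
            break
    else:
        result = "value set exhausted"             # the "exitfor" tail
    return result

print(find_code(12))   # premature exit at 12
print(find_code(99))   # value set exhausted
```

Note that `code` and `result` are still in scope in both tails, which is exactly the property the post says a flag-and-test rewrite loses.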