Code Project › General Programming › Algorithms

Aide pour un programme langage c (Help with a C-language program)

Tagged: dotnet · 40 Posts, 10 Posters
trønderen wrote:

    Eddy Vluggen wrote:

    Never, ever, do I code in the local language, as no dog is ever gonna learn Dutch just for maintaining a code base. Ever.

My idea of a token-style representation of a program is that no one should have to learn Dutch to maintain your code base. If all they master is English, the tokens are mapped to their English representation for that programmer. You map the tokens to Dutch when discussing the solution with your Dutch customer, or to German if the customer (or end user) is German. The point is not having to learn a different language, not even English.

Lost User replied (#23):

Adding an interpreter adds another burden and a potential failure point to the tool chain, plus the requirement for an in-house local-language-to-interpreter translator.

    "Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I

trønderen wrote:

      Eddy Vluggen wrote:

      So. If you want to code, you better learn English and not French.

My concern is not that I have to master English (I guess I master it far above the level required for programming), but the customer / end user. We programmers have a tendency to lock ourselves up in an ivory tower, where we want to lock the door and work in total isolation (maybe as a programming team rather than as individuals), most certainly isolated from the customer and the users. We refuse to communicate with anyone who doesn't master our tribal language to perfection. I think this is very bad for our profession. We have a lot to learn from those who have the problems and tasks we are trying to solve. They do not speak our tribal language; we have to speak theirs.

Lost User replied (#24):

You're talking about an IT "shop"; individuals and smaller outfits wouldn't be able to function in the outside world with that attitude. The one thing that progress did was create middlemen (i.e., business analysts) who separated user and "creator". A technical lead cannot be a lead without having interacted with the eventual users, IMO.


trønderen wrote:

        jschell wrote:

trønderen wrote: I frequently get the feeling that we programmers actively want our code to be unintelligible for the customer ... I think we ought to.

Rather creative quoting you are doing there! I do not think we ought to make "our code unintelligible for the customer"!

        jschell wrote:

        It has been tried. On large scale and small. And it continues to be tried. But it does not work.

I know of lots of end-user 'macro' languages that exist in different language varieties; even system functions are localized. People with no programming background are capable of adapting applications to their own needs without having to learn English. It certainly works in the small.

I do not know of any compiler storing the code as a semi-parsed tree of abstract tokens, applying a concrete syntax only in the presentation for a human developer. Nor am I familiar with any other large-scale failed attempt to localize a tool, whether a programming tool or a tool for another application area, where a significant deployment of localized versions was pulled back and replaced with English-language versions. If you can point to one example of failure: one failed project does not imply that the principle has no merit. If you are eager to 'prove' that English Is The Answer, you may of course justify your attitude by referring to the failure. Otherwise, you may study the failure to learn why it failed and what could be done better.

An example: the first release of localized Excel formulas did localize function names. In multi-language corporations, you could not share a spreadsheet between those working in an English context and those in a Norwegian context; the function names from the 'other' language were not found. In your approach, it seems the proper solution would be to force everyone back to English. Instead, a later Excel version replaced the internal representation of system functions (which was by localized name) with an abstract reference, something like 'built-in 37', displayed as 'average' in English versions and 'gjennomsnitt' in Norwegian versions. (In a spreadsheet, 'variables' are referenced by row and column, so the problem of localized variable names does not arise.)
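The Excel fix described above (store an abstract function ID; localize only the display name) can be sketched in a few lines. The IDs and name tables below are invented for illustration; they are not Excel's actual internal encoding.

```python
# Sketch of the Excel-style fix: a formula stores an abstract function ID,
# and each locale maps that ID to a display name. IDs and tables here are
# invented; they are not Excel's real internal representation.

FUNCTION_NAMES = {
    "en": {37: "AVERAGE", 4: "SUM"},
    "no": {37: "GJENNOMSNITT", 4: "SUMMER"},
}

def display_formula(func_id, cell_range, locale):
    """Render a stored (func_id, range) pair for one locale's UI."""
    return f"={FUNCTION_NAMES[locale][func_id]}({cell_range})"

# The same stored formula opens correctly in either locale:
stored = (37, "A1:A10")
print(display_formula(*stored, "en"))  # =AVERAGE(A1:A10)
print(display_formula(*stored, "no"))  # =GJENNOMSNITT(A1:A10)
```

Because the stored form never contains a localized name, a spreadsheet saved in one locale needs no translation pass before it is opened in another.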

        jschell wrote:

        it could even be possible to discuss the code with a customer who is not fluent in English!

jschell replied (#25):

        trønderen wrote:

I do not think we ought to make "our code unintelligible for the customer"!

        I realize what you are saying.

        trønderen wrote:

        My old mother could not distinguish between a PC

I didn't claim people were stupid. I never do that; I disdain programmers who think users are stupid. But as I pointed out, your idea is not new. COBOL was created with exactly that in mind: someone besides a programmer could read the code and more easily understand it. The problem, however, is still that actually creating a complex application requires that someone somewhere be a 'programmer'. And all attempts to move that out of the developer space either result in something that only supports simplistic examples or require that someone else (like a customer) become a 'developer'.

        trønderen wrote:

        I have been teaching '101 Programming' to people who had never before sat down at a computer

        So you were teaching them to be programmers. Not users.

        trønderen wrote:

I know that users with domain knowledge and experience are very good at understanding even tiny details in a computer solution, if you are willing to listen when they tell you something and to speak a similar language when you explain your proposals to them.

I have written requirements for entire systems based on user/customer requests, and designed architectures to meet the needs as they describe them, while leading users through the process of not only describing what they want and need but also picking out the parts that they understand but have not verbalized, such as (the very common need) how to handle failure scenarios. But I do that so they can focus on what they do best while others (developers) focus on what they do best.

Lost User (Gerry Schmitz) wrote:

I found that if I deliver software that "I would like to use", it never fails to please. I also write in such a way that I don't need to create help files; if something were needed, a video screen capture of perhaps 10 minutes would be all it took. As for "communicating", I learn the user's job, and lingo, to the point that I can do it. In English. No new languages were created. (And the end user doesn't care what "coding" language I use, as long as they're happy.) And I have no problem talking customers out of software and services, including my own, that they don't need.


jschell replied (#26):

          Gerry Schmitz wrote:

          I have no problem talking customers out of software and services, including my own, that they don't need.

My understanding now, however, is this: customers need applications to do certain tasks; from the service provider's point of view you must provide that so they continue to be a customer. Customers also want applications to do certain tasks; from the service provider's point of view you must provide that so they become a customer. The two might overlap, but certainly sometimes they do not.

Lost User (Gerry Schmitz) replied (#27):

jschell wrote:

            Customers need applications to do certain tasks. From the service provider point of view you must provide that so they continue to be a customer.

The customer who wants an application to do certain tasks is "wrong". They tell me what they "need", and I tell them what "tasks" the application will perform, if any. E.g.: (1) "We send sports statistics for evaluation and reports. It is very slow and we need you to write a new app to speed up the process." All they needed to do was zip the files. (True story.) (2) Streamlining a law office: I sent them happily off using SharePoint Online subscriptions.

You're suggesting you would write them a redundant app and take the money because "I must provide it, etc." Not for me you don't.


jschell replied (#28):

I believe you are talking either about implementation rather than features, or just about doing a better analysis to find a solution to the problem the user stated. That doesn't change what I said, however: the user is still stating needs and wants. They might want the application to be 100 times faster, but they do not need that. They might need a way to enter an external invoice number into the tracking system, but they might want it to replace the internal invoice number.

              Gerry Schmitz wrote:

              You're suggesting you would write them a redundant app and take the money

              No that is not what I am suggesting. I am talking primarily about SaaS and the difficulty in creating something that produces a profit (not just revenue) on a continuing basis.

Lost User (Gerry Schmitz) replied (#29):

You think you know a lot about how I operate, but you don't. I also "build" systems. People will tell me I need to "add this"; I show them "if you do it this way, then that works too". Which gets back to: I learn the customer's business, so I don't build pointless functionality just to get paid. And I only work as a "lead", so I would have a lot to say about your M.O.


jschell replied (#30):

                  Gerry Schmitz wrote:

                  And I only work as a "lead", so, I would have a lot to say about your M.O.

Sigh... no idea what that comment has to do with anything I said. But since it seems to be questioning my competence: I have been a senior developer for more than 30 years, including in supervisory roles. I have written requirements, architectures and designs too many times to count. I have analyzed customer requests, including official Statements of Work, to ensure that the needs of both the company (one I was working with/for at the time) and the customer were being met. I have seen others fail to do that, such as the case where one third of the customer company refused to use the new system because of the failure to meet (and even discover) a single need (not a want).

Besides working with customers, I have also worked with Sales: the people attempting to convince others to provide new business, not just existing business. I will also note that I have worked with at least one consultant who took several hours to present a solution (a complex one) that was really 'cool' but which addressed something the company neither needed nor wanted. When I questioned the assumptions behind it, I was told I didn't know what I was talking about. At that point the CEO and founder also spoke up and noted that he had no idea what assumptions the consultant was using either, because they did not fit the actual business model.

So I will stick with what I was saying. There are 'needs' and 'wants'. They can and do overlap. They can and do often serve different purposes.

trønderen wrote:

Algol 68 was explicitly defined for adaptation to different languages: the syntax was defined using abstract tokens that could be mapped to various sets of concrete tokens. This is no more difficult than having a functional API definition with mappings to C++, PHP, Fortran, Java, ... Obviously, to define these mappings, you must thoroughly understand both the API and the language you are mapping to. It is not always trivial.

When you choose concrete tokens for a programming language, it is not something you do on a Friday night over a few beers. It is professional work: you must know the semantics of the abstract tokens, and you must know the natural language from which you select your keywords. You must be just as careful when selecting a term as the English-speaking language designers are when they select their English terms. If the language defines some tokens as reserved, you must honor that even in your alternate concrete mapping.

In your French Algol version, I assume the source code was maintained in a plain text file (probably in EBCDIC, for IBM in those days), handled by the editor of your choice, so switching between English and French would require a textual replacement. If the source code were instead stored as abstract tokens, maybe even as a syntax tree, it would require an editor made specifically for that format. (Note that you could still have a selection of editors for the same format!) The editor might look up the concrete syntax only for the part of the tree currently displayed on screen; 'translation' is done by redrawing the screen using another table of concrete symbols.

This is certainly extremely difficult, probably across the borderline to the impossible, if we insist on thinking along exactly the same tracks as we always have, refusing to change our ways even a tiny bit. I can certainly agree that it is fully possible to construct obstacles to prevent any change in our ways of thinking. I am not hunting for that kind. Like you, k5054, I observe that 'It happens, so it must be possible'.
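The redraw-from-the-tree idea can be sketched as follows; the node shapes and keyword tables are invented for illustration and are not taken from any Algol 68 implementation:

```python
# Sketch: source is stored as a tree of abstract nodes; concrete keywords
# appear only when the tree is rendered for display. 'Translation' is just
# redrawing with a different table. Node kinds and tables are invented.

KEYWORDS = {
    "en": {"if": "if", "then": "then"},
    "fr": {"if": "si", "then": "alors"},
}

def render(node, locale):
    """Redraw one subtree using the given locale's keyword table."""
    kw = KEYWORDS[locale]
    if node["kind"] == "if":               # abstract [if] node; no text stored
        return (f"{kw['if']} {render(node['cond'], locale)} "
                f"{kw['then']} {render(node['body'], locale)}")
    return node["name"]                    # identifiers pass through unchanged

tree = {"kind": "if",
        "cond": {"kind": "ident", "name": "x"},
        "body": {"kind": "ident", "name": "y"}}

print(render(tree, "en"))  # if x then y
print(render(tree, "fr"))  # si x alors y
```

The stored tree is the single source of truth; no textual search-and-replace is ever needed to switch display languages.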

jschell replied (#31):

                    trønderen wrote:

                    The syntax was defined using abstract tokens that could be mapped to various sets of concrete tokens.

In computer science, the area of compiler theory is very old and very well studied. Your statement describes something that well-designed compilers (and interpreters) already do. The only time I have seen a 'compiler' not do that, it was coded by someone who had zero training in the science of compilers. As I suggested before, the problem is not in creating tokens. The problem is, first, in creating the language such that it is deterministic and, second, in creating a compiler that can report errors. That last part is the most substantial part of every modern compiler (even toy ones).

                    trønderen wrote:

                    If the source code was rather stored as abstract tokens,

Parsing text into tokens is the first part of what all compilers/interpreters do; 'Compiler Design - Phases of Compiler' is one description of this very well-known process. What you are describing does not address the actual problem. Here is the English version of a very standard construct in programming languages:

                    if x then y

Now the French version:

                    si x alors y

So in the above, for just two natural languages, you now have four keywords in the language. Let's add Swedish:

                    om x så y

So for every natural language added, it is reasonable to expect the number of keywords to double. Keywords often cannot be used in code, both because allowing that makes it much harder for the compiler to figure things out and to correctly report errors. Additionally, even when context allows the compiler to figure it out, it is not ideal for human maintenance. Consider the following statement: if one were using a different native language to drive the compiler, it should be legal. But in the English version, do you really want to see this code?

                    int if = 0;

So not only would the number of keywords increase, but the programmer would still need to be aware of all of those keywords while coding. Now besides the increasing number of keywords the
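The keyword-growth objection above can be made concrete with a small sketch: if a compiler accepts every locale's spellings as keywords, the reserved set becomes the union of all the tables. The tables and helper below are invented for illustration:

```python
# Sketch of the objection: accepting every locale's keyword spellings means
# the reserved-word set is the union of all the tables, so an identifier
# that is fine in one locale collides with a keyword from another.
# Tables are invented; a real language has far more keywords.

KEYWORDS = {
    "en": {"if", "then"},
    "fr": {"si", "alors"},
    "sv": {"om", "så"},
}

ALL_RESERVED = set().union(*KEYWORDS.values())   # grows with every locale

def is_legal_identifier(name):
    return name.isidentifier() and name not in ALL_RESERVED

print(len(ALL_RESERVED))            # 6 keywords for just three locales
print(is_legal_identifier("si"))    # False: a fine English identifier,
                                    # but reserved via the French table
```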

trønderen replied (#32):

                      jschell wrote:

                      In Computer Science the area of Compiler Theory is very old and very well studied.

                      I sure wish that was true for everybody creating new languages! (Note that I did not refer explicitly to C and all the languages derived from it.)

                      The problem is in creating the language in the first place such that it is deterministic and second it creating a compiler that can report errors. That last part is the most substantial part of every modern compiler (even toy ones.)

Reminds me of VAX/VMS: every message delivered by system software (including compilers) was headed by a unique but language-independent numeric code. Support people always asked you to supply the code; the message text could be in any language, since they never read it anyway.

                      So for every language added it is reasonable to expect that the number of keywords would be duplicated.

You are missing my point completely. Neither if, then, si, alors, om nor så is a reserved word in the language. The language would define non-text tokens, call them [if] and [then] if you like, but the representation is binary, independent of any text.

                      Keywords often cannot be used in code both because it makes it much harder for the compiler to figure it out and for it to correctly report on errors.

No one is suggesting that you be allowed to use the binary [if] token as a user-defined symbol. The display representation of the binary [if] token could be, e.g., boldface if, or [if], si, [si], om, [o], or some other way of visually highlighting that this is not a user identifier but a control statement token.

For creating new control structures, an IDE working directly on a parse tree representation could provide function keys for inserting complete control skeletons. I have worked with several systems that work that way, for data structures, graphic structures, and program code, although the latter inserted textual keywords, not binary tokens the way I wish it would. Once you get out of the habit of thinking of your program as a flat string of 7-bit ASCII characters, it is actually quite convenient! (You can assign the common structures, like if/else, loops, methods etc., to the F1-F13 keys so that you don't have to move your hand over to the mouse to select from a menu.)

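trønderen's counterpoint, that the stored form is a binary token rather than any spelling, can also be sketched; the enum values and the bracketed rendering below are invented for illustration:

```python
# Sketch of the counterpoint: control tokens are stored as binary values and
# identifiers as plain strings, so no spelling is ever reserved. The enum
# values and the [bracketed] rendering are invented for illustration.

from enum import Enum

class Tok(Enum):
    IF = 1
    THEN = 2

DISPLAY = {
    "en": {Tok.IF: "[if]", Tok.THEN: "[then]"},
    "fr": {Tok.IF: "[si]", Tok.THEN: "[alors]"},
}

def render(program, locale):
    """Show control tokens per locale; identifiers pass through unchanged."""
    table = DISPLAY[locale]
    return " ".join(table[t] if isinstance(t, Tok) else t for t in program)

# A user identifier literally spelled "if" cannot collide with Tok.IF,
# because the stored forms differ in kind, not in spelling:
program = [Tok.IF, "if", Tok.THEN, "y"]
print(render(program, "en"))   # [if] if [then] y
print(render(program, "fr"))   # [si] if [alors] y
```

The bracketed rendering stands in for the boldface or highlighted display mentioned above; any visual convention works, since the distinction lives in the stored token kind, not the text.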
jschell replied (#33):

                        trønderen wrote:

                        I sure wish that was true for everybody creating new languages! (Note that I did not refer explicitly to C and all the languages derived from it.)

C? Compiler theory applies to any language (including interpreters).

                        trønderen wrote:

                        Neither if, then, si, alors, om or så, are reserved words in the language. The language would define non-text tokens, call them [if] and [then] if you like, but the representation is binary, independent of any text.

That is a non-starter. The human needs to write the code, and using token representations that the user must memorize would not work. If the user at any time writes something like 'if' and 'then', then those are keywords for the language. That is how it works, just as it works that way in natural languages. Changing the spelling (English or otherwise) does not alter what a system that must eventually run the code still has to do: convert the keywords into something else. And defining keywords is necessary for any computer language, because the grammar is not deterministic otherwise.

                        trønderen wrote:

                        The display representation of the binary [if] token could be e.g. as (boldface) if, or as [if], si, [si], om, [o] or some other way to visually highlight that this is not a user identifier but a control statement token.

Errr...no idea what you are talking about. The 'bold' just becomes part of the textual representation of the keyword. No different than requiring that the keyword is in lower case. You seem to think that because you use bold on a keyword, it is no longer a keyword. It doesn't matter how you differentiate it in the language specification; it is still a keyword. And no developer is going to work in a language where they need to make keywords by switching to bold and back.

                        trønderen wrote:

                        Why can't the parser define a binary 'comment' token,

Because the content of the comment is NOT the token that tells the compiler that it is a comment. The content is what is contained by the comment. So in the following, the value of the comment is the text, not the '//':

// A comment in English is useless in French.

                        trønderen wrote:

                        I have been working with third party APIs with French method


trønderen
#34

                          jschell wrote:

                          C? Compiler theory applies to any language (including interpreters.)

                          Well, of course. And it sure is a good idea to know at least fundamental compiler theory before you sit down to create a new language, if you want to make a good one. History has shown that not all language makers have had extensive compiler theory background. Hence my comment.

                          The human needs to write the code. Using token representations that the user is responsible for memorizing would not work. If the user at any time uses something like 'if' and 'then' then those are keywords for the language. That is how it works.

Once again: try to liberate yourself from this fixation on a code file always and invariably maintained and stored as a flat string of ASCII characters. Hopefully, you are able to do that in document processing systems: you create a new chapter at level two by hitting a function key or making a menu selection, not by inserting e.g. the strings '< h2>' and '< /h2>' in the body text. (Sorry about the extra space after the '< 's - it is required here, because this is not a proper document editor. In, say, MS Word, I could have written the markup without any such considerations.) In a document processor, there are no reserved text-body words, character sequences or characters. There is no law of nature that says that just because a document is source code for a compiler / interpreter, there must be keywords / reserved words, or that its structure must be represented textually - that is not 'how it works'. Any WYSIWYG document processor will prove you wrong.

                          And defining keywords is necessary for any computer language because it is not deterministic otherwise.

                          You certainly need to define a representation for structural elements, but try to understand that once you liberate yourself from the flat-sequence-of-characters mindset, those structure elements need not be alphabetic. In a document processor file, there are no 'keywords' to represent a hierarchical chapter / section structure; the structure is maintained in binary, non-textual format. You could do the same for a program code file. (I said this earlier; it appears necessary to repeat it.)

                          Errr...no idea what you are talking about. The 'bold' just becomes part of the textual representation of the keyword. No different than requiring that the keyword is in lower case.


jschell
#35

                            trønderen wrote:

                            History has shown that not all language makers have had extensive compiler theory background.

                            Who exactly?

                            trønderen wrote:

                            there must be keywords / reserved words just because that document is source code for a compiler

                            You seem to be missing the point. The compiler creates tokens from the key words. The key words exist because humans require them. You are not removing humans from your idea so key words are still required.

                            trønderen wrote:

                            You could do the same for a program code file

                            The key words and the rest of the language definition provides the structure that the compiler then creates. Doesn't matter how you wrap it up the human must still provide the information.

                            trønderen wrote:

                            How is that in a document processor?

I have written multiple compilers/interpreters, so I do understand how they work. I have also delved into the source code of other compilers and editors. As I already said, you are equating the text in a Word document, which is what matters to humans, with what matters in a programming language. That is simply not true; your analogy is flawed. As I pointed out, the text that a human sees in a Word document is equivalent to the text in a comment in code.

                            trønderen wrote:

                            I have suggested that we extend the scope to other representation formats:

So certainly no one else in 80 years has wondered whether there is a better way to provide for programming, so obviously it is up to you to actually create what you are suggesting. Good luck.

                            • T trønderen

Algol68 was explicitly defined for adaptation to different languages: the syntax was defined using abstract tokens that could be mapped to various sets of concrete tokens. This is no more difficult than having a functional API definition with mappings to C++, PHP, Fortran, Java, ... Obviously, to define these mappings, you should both thoroughly understand the API, and of course the language you are mapping to. It is not always a trivial thing to do.

When you choose concrete tokens for a programming language, it is not something that you do on a Friday night over a few beers. It is professional work, where you must know the semantics of those abstract tokens, and you must know the natural language from which you select your keywords. You must be just as careful when selecting a term as the English-speaking language designers are when they select their English terms. If the language defines some tokens as reserved, you must honor that even for your alternate concrete mapping.

In your French Algol version, I assume that the source code was maintained in a plain text file (probably in EBCDIC, for IBM in those days), handled by the editor of your choice. Switching between English and French would require a textual replacement. If the source code was instead stored as abstract tokens, maybe even as a syntax tree, it would require an editor specifically made for this format. (Note that you could still have a selection of editors for the same format!) The editor might choose to look up the concrete syntax only for the part of the tree that is at the moment displayed on screen. 'Translation' is done by redrawing the screen, using another table of concrete symbols.

This is certainly extremely difficult, probably across the borderline to the impossible, if we insist on thinking along exactly the same tracks as we have always done before, refusing to change our ways even a tiny little bit. I sure can agree that it is fully possible to construct obstacles for preventing any sort of change in our ways of thinking. I am not hunting for that kind. Like you, k5054, I observe that 'It happens, so it must be possible'.

jsc42
#36

I know that I am late joining this conversation but ... You refer to keywords in Algol 68. Algol 58 (from which Algol 60, Coral 66, Algol 68 (R and S), Algol W etc. were derived) just had tokens, as you state. The characters or symbols used to create tokens were an implementation issue, not a design issue. The standards used letter sequences to indicate the uses of the tokens (e.g. begin, end and if), but that was purely for typographic reasons in the specification and did not define how they were to be entered.

The version of Algol 60 that I used (ICL 1900) used quoted strings (e.g. 'BEGIN', 'END', 'IF'). The use of braces in C to represent begin and end would have been a perfectly acceptable implementation. Some of the uses of (, ), ? and : in Algol 68 were valid actualisations of the begin, end, then and else keywords.

I liked the Algol 68 mirror-image brackets, e.g. ( and ), [ and ], CASE and ESAC, IF and FI; especially as you could also use COMMENT and TNEMMOC. You may have noticed that all of the keywords (not tokens) above are in uppercase - that is because I worked on 6-bit character machines and lowercase did not exist.


trønderen
#37

I guess that source code files were stored as plain text, using the selected set of word symbols, right? So you couldn't take your source file to another machine, with other concrete mappings, and have it compiled there. (I have never seen TNEMMOC, but I have seen ERUDECORP. I suspect that it was a macro definition, though, made by someone hating IF-FI and DO-OD. Btw: it must have been in Sigplan Notices around 1980 that one guy wrote an article "do-ob considered odder than do-od". The article went on to propose that a block be denoted by do ... ob in the top corners and po ... oq in the lower corners. Maybe it wasn't Sigplan Notices, but the Journal of Irreproducible Results :-))

I have come to the conclusion that a better solution is to store the parse tree, and do the mapping to word symbols only when presenting the code on the screen to the developer (with keywords indicating structure etc. read-only - you would have to create new structures by function keys or menu selections). This obviously requires a screen and a 'graphic' style IDE, which wasn't available in the 1960s, and processing power that wasn't available in the 1960s. Today, both screens and CPU power come thirteen to the dozen.

One obvious advantage is that you can select the concrete symbols to suit your needs, mother tongue or whatever. A second advantage is that you never see any misleading indentation etc. - any such thing is handled by the IDE. This is closely connected to the third advantage: as all developer input is parsed and processed immediately, and rejected immediately if syntactically incorrect, there is no way to store program code with syntax errors. Of course the immediate parsing requires more power than simply inserting keystrokes into a line buffer, but it is distributed in time: spending 10 ms of CPU on keystrokes separated at least 100 ms apart is perfectly OK (and you do it not per keystroke, but per token).

And you save significant time when you press F5 (that is, in VS!) to compile and run your program in the debugger: some steps that are known for being time consuming are already done. Lexing and parsing are complete. F5 can go directly to the tree hugging stage, doing its optimizations at that level, and on to code generation, and the program is running before your finger is off the F5 key. In my (pre-URL) student days, I read a survey of how various compilers spent their time. One extreme case was a CDC mainframe that spent 60% of its time fetching the next character from the


Victor Nijegorodov
#38

                                  trønderen wrote:

                                  I guess that source code files were stored as plain text, using the selected set of word symbols, right? So you couldn't take your source file to another machine, with other concrete mappings, and have it compiled there.

Excuse me for putting my few pennies into your great discussion; however, I am afraid that in the times of ALGOL-60, "source code files" existed only on punched cards! ;P


trønderen
#39

My freshman class was the last one to hand in our 'Introductory Programming' exercises (in Fortran) on punched cards. Or rather: we wrote our code on special Fortran coding forms, and these were punched by secretaries and the card decks put in the (physical!) job queue. The Univac 1100 mainframe did have an option for punching binary files to cards. I believe that the dump was more or less direct binary, zero being no hole, 1 a hole, 4 columns per 36-bit word. Such cards were almost 50% holes, so you had to handle them with care! (I never used binary card dumps myself.) The advantage of punch cards is that you had unlimited storage capacity.

When, the following year, we switched to three 16-bit minis, a full-screen editor and Pascal, we had 3 * 37 Mbyte for about a thousand freshman students. When the OS and system software had taken their share, each student had access to less than 100 kbyte on average, and no external storage option, so we had to do frequent disk cleanups (I was a TA at the time, and the TAs did much of the computer management).


jsc42
#40

                                      [edit] I had written the response below before I noticed that other folks had already replied with similar stories. [/edit]

                                      trønderen wrote:

                                      I guess that source code files were stored as plain text, using the selected set of word symbols, right? So you couldn't take your source file to another machine, with other concrete mappings, and have it compiled there.

That is correct - the source code was hand-punched onto (80-column) cards. I got quite adept with the multi-finger buttons for each character. You then put the box of punched cards into a holding area where someone would feed them to the card reader (hopefully without dropping them and randomly re-ordering them). Then the job was run and you got a line printer listing delivered to the same holding area (where, hopefully, your card deck was also returned) - this was the first time you could see the texts you had written. At University, the turnaround time was half a day; at my first full-time job it was nearer a fortnight; so computer run times were an insignificant part of the round-trip time.

Before Uni, I had to fill in coding sheets which were posted to the computer centre (round trip: one or two weeks). This added an extra layer of jeopardy - would the cards be punched with the texts written on the coding sheets? The answer was almost invariably 'No' for at least three iterations, so the first run (enabling debugging) could be six weeks later than the date that you wrote the program.
