Are there reasons for beginner programmers to learn C?
-
C should not be inflicted upon anyone. It is clumsy, slow to code in, and inelegant, and it is the source of the world's most inefficient and buggy programs. It is only due to inertia and legacy that it survives, mostly within failing companies.
Rob Ford 2 wrote:
C should not be inflicted upon anyone.
It is clumsy, slow to code in, and inelegant, and it is the source of the world's most inefficient and buggy programs. It is only due to inertia and legacy that it survives, mostly within failing companies.

Well, this is an incredibly short-sighted view of things. The new .NET-type interface is probably very efficient for writing web pages. Some of us write device drivers. That is a job to be done every time a new device is put on the market. Challenge: try to write a basic device driver in managed code! Come to it, 99% of the OS (e.g. Windows 7 or 8) is written in C or C++; the remainder is written in assembler. What makes your managed code run is ultimately the effort of C and C++ programmers. They have been extremely successful. So successful, in fact, that people like yourself who use those advanced interfaces end up thinking that C++ is obsolete. To put the facts straight: when you write a piece of .NET code, your result is syntax-checked and, if it passes, compiled into 'intermediate code'. The compiler that performs this feat was written in C and C++. (It would be impossible to write this sort of compiler in a .NET language.) Never mind .NET languages: the compilers for them are still written in C and C++. The runtime (that's when you see your code on a screen somewhere) equally rests on OS-level code, originally written in C++. In short: the crap you write in managed code could never even be displayed if others had not written code in C or C++ to actually display it!
Bram van Kampen
-
Absolutely not! Code should do the best it can to solve the problem (for your customer). I'm having this very argument right now: write generic code that can be ported to Linux, versus sharing what code you can but tailoring the core to each platform. (The current version is more generic and simply doesn't scale well.) I'm also tired of using programs that suck on every platform, all in the name of being cross-platform.
Hi, those things sometimes happen! C and C++ are closest to the hardware; managed code is closest to the overall concept. At the same time, managed code requires hard-wired native code in order to run.
Bram van Kampen
-
I had some initial introduction to BASIC in high school - way back in trash-80 (TRS-80) days. But the first and, I feel, MOST beneficial college class I EVER took was programming logic, with pseudo-code and flowcharting. After that my first language was C, then later C++. If I had tried to learn C++ first, I'm not sure I would have understood what was actually happening. When I'm writing code I want to make sure I understand exactly what is happening. There is a lot in C++ that was much easier to learn knowing the basic C language first.
spotsknight wrote:
If I had tried to learn C++ first I'm not sure I would have understood what was actually happening.
Exactly - it needs to be learned in layers. Students should have a firm grasp of the fundamentals before advancing to OOP and such.
-
It's interesting that you mention "assembly". The very history of C as a programming language speaks loudly.

Long ago and long ago (as the Native Americans would say), Bell Laboratories purchased a DEC PDP-7 computer. That machine was quite primitive, more or less a "minimal" computer. It had either 4K or 8K of 18-bit words, 16 op-codes, no multiply or divide instructions, no index registers, and primitive indirect addressing. It did have a set of memory locations that were "auto-increment", which simulated very primitive index registers but were not terribly useful overall. The principal I/O devices were paper tape and a Model 33 Teletype. In fact, a bare-bones PDP-7 had only a teletype, in which case it would have been a Model 35 ASR, which had a paper-tape reader and punch included. VERY slow!

Bell Labs wrote a language called BPL (for Bell Programming Language, I think) which they used as an alternative to the assembly language supplied by DEC. I've never seen any details on BPL. However, Bell Labs used BPL to write a FORTRAN compiler for the PDP-7, as odd as that might sound.

When DEC came out with the PDP-11, Bell Labs bought one and jumped on it like a duck on a June bug! They wrote a translator that converted the BPL translator (probably other programs as well) to run on the PDP-11. Using the translated BPL, they developed a new language: C. Having gone from rags to riches in terms of machine-language capability, the C designers included features in the language to utilize many of the newly available features of the PDP-11. In particular, the auto-increment/decrement and to-memory instruction modifiers were incorporated in the ++/--/+=/-= operators, and the indirection modifiers gave rise to the pointer operators.

The above information I learned from a Bell Labs programmer/developer at a DECUS (Digital Equipment Computer Users' Society) meeting. He was one of the original creators of C, but unfortunately I forget his name. He told me that when they developed C, they had in mind a "portable assembler" that would allow them to port code to any architecture by merely writing a translator for C for that new machine. Good C programmers, he said, visualized assembly code as they wrote in C.

For anyone interested, here is a link to the PDP-11 "card": [^] Given the .NET availability these days, it ma
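To see the "portable assembler" idea in code, here is a minimal C sketch (my own illustration, not anything from Bell Labs) of the idiom most often cited in this connection: a string copy whose post-increment pointer accesses map naturally onto the PDP-11's auto-increment addressing modes.

#include <stdio.h>

/* Classic K&R-style string copy. On the PDP-11, "*dst++ = *src++"
   could compile down to a single MOVB instruction using auto-increment
   addressing on both operands -- the hardware feature the ++ operator
   is said above to reflect. */
static void str_copy(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')
        ;   /* empty body: the copy happens in the loop condition */
}

int main(void)
{
    char buf[32];
    str_copy(buf, "PDP-11");
    printf("%s\n", buf);    /* prints: PDP-11 */
    return 0;
}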
Hi Mr. Ranshaw, very informative, thanks.

"He told me that when they developed C, they had in mind a "portable assembler" that would allow them to port code to any architecture by merely writing a translator for C for that new machine. Good C programmers, he said, visualized assembly code as they wrote in C."

Yes, yes, but they failed [well, partially they succeeded] to achieve this nifty goal.
Get down get down get down get it on show love and give it up What are you waiting on?
-
Are there reasons for beginner programmers to be taught C instead of C++? I'm not even thinking about object-oriented programming, just simple procedural programming. I'm reading a lot of questions on CodeProject and on StackOverflow where people ask about issues with C language features that are so prone to errors and defects that it makes me cringe. A lot of those issues could be handled by simple C++ features (memory management (new/delete, smart pointers), strings, collections, references, ...). I know there is a lot of legacy code out there and it should still be maintained, but old code "ways" should not be the emphasis of education. :confused:
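To make the cringing concrete, here is a minimal C sketch (my own illustration, not taken from any particular question) of the kind of trap being asked about - manual buffer sizing and ownership, which std::string or a smart pointer would turn into a non-issue:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *name = "Maximilien";

    /* The error-prone C idiom: the caller must remember the +1 for the
       terminating NUL, must check malloc's result, and must free exactly
       once. Forgetting any of these gives a buffer overflow, a NULL
       dereference, or a leak/double-free -- in C++, std::string handles
       all three automatically. */
    char *copy = malloc(strlen(name) + 1);
    if (copy == NULL)
        return 1;
    strcpy(copy, name);

    printf("%s\n", copy);
    free(copy);             /* ownership enforced only by convention */
    return 0;
}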
Watched code never compiles.
Hi Maximilien,

>I'm reading a lot of questions on CodeProject and on StackOverflow where people ask about issues with C language features that are so prone to errors and defects that it makes me cringe.

Yes, this is the bad side of going down to hell: in order to forge a high-quality blade you need hellish heat, you know.

>Are there reasons for beginner programmers to be taught C instead of C++?

Pretty simple: when you treat a child as a child, you underestimate his/her potential by imposing your own limits. If beginners are afraid to enter deep waters (that is, to evolve), it is better for them not to deal with C at all.

>I know there is a lot of legacy code out there and it should still be maintained, but old code "ways" should not be the emphasis of education.

What about the old mantra 'data plus algorithms equals programs'? In my view it is absolutely mandatory to know the basics of algorithms; the programming languages come as a natural 'NEXT STEP'.
Get down get down get down get it on show love and give it up What are you waiting on?
-
"... but they failed [well partially they succeeded] to achieve this nifty goal." In what way did they fail to achieve that goal? More of the history is that Bell Labs developed an operating system for the PDP-7 (for what there was none previously). After they got C running on the PDP-11, they used C to write a translator from BPL (ie, their compiler for the PDP-7). Using that BPL -> C translator, they ported their operating system onto the PDP-11. Of course they had to write assembly code to handle the various low-level drivers. It was that port of their PDP-7 operating system that grew into UNIX(TM). By the way, "UNIX" means "UNIversal eXecutive" according to the Bell Labs guy I talked to. Also witness the various ports of UNIX to a plethora of platforms, all using (as far as I know) some manifestation of C. For example, LINUX and BSD.
-
"... but they failed [well partially they succeeded] to achieve this nifty goal." In what way did they fail to achieve that goal? More of the history is that Bell Labs developed an operating system for the PDP-7 (for what there was none previously). After they got C running on the PDP-11, they used C to write a translator from BPL (ie, their compiler for the PDP-7). Using that BPL -> C translator, they ported their operating system onto the PDP-11. Of course they had to write assembly code to handle the various low-level drivers. It was that port of their PDP-7 operating system that grew into UNIX(TM). By the way, "UNIX" means "UNIversal eXecutive" according to the Bell Labs guy I talked to. Also witness the various ports of UNIX to a plethora of platforms, all using (as far as I know) some manifestation of C. For example, LINUX and BSD.
>In what way did they fail to achieve that goal?

I meant the present time; you are into the genesis, whereas I am a simple nowadays C user. Assembly is the core and C mimics it. For example, recently (for reference, the 'Fastest strstr-like Function in C!?' article) I defined a variable as a register, and guess what: despite the desperate need for it, C had another agenda. I mean, C is good; assembly is best.

>By the way, "UNIX" means "UNIversal eXecutive"

It makes sense; it is strange how such basic/important names are unknown.
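For anyone who wants to reproduce the observation, a minimal sketch (not the code from the article mentioned) showing that register is only a request: the compiler is free to ignore it, and the one thing C does enforce is that the variable's address cannot be taken.

#include <stdio.h>

int main(void)
{
    /* "register" is a hint, not a command: the compiler may keep i in
       memory anyway, and modern optimizers do their own register
       allocation regardless. */
    register int i;
    long sum = 0;

    for (i = 0; i < 1000; i++)
        sum += i;

    /* printf("%p\n", (void *)&i); -- would not compile: C forbids
       taking the address of a register variable */
    printf("%ld\n", sum);   /* prints: 499500 */
    return 0;
}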
Get down get down get down get it on show love and give it up What are you waiting on?
-
"I defined a variable as a register and guess what despite the desperate need of this C has had other agenda - I mean C is good Assembly is best." Well, ideally C ought to be kept distinct from processor specific features. Unfortunately, as I mentioned in my first post, such have been part and parcel of the language since its original specification. Or soon after. In fact, I for one would love to see the ++x/--x et al removed from the language, as they ARE based on the capabilities of the PDP-11. That aside, I have ported C to the 6502, 6800, and even the PDP-10 processors, retaining various portions of the language as were feasible on a particular CPU. They worked very well. Yes, I had to rely on assembly code for the "down and dirty" things. But not all CPUs have addressable registers, which would make such a capability in C a tad awkward, to say the least. To quote the Bible (sort of): "Render unto the assembler that which is the assembler's."
-
"I defined a variable as a register and guess what despite the desperate need of this C has had other agenda - I mean C is good Assembly is best." Well, ideally C ought to be kept distinct from processor specific features. Unfortunately, as I mentioned in my first post, such have been part and parcel of the language since its original specification. Or soon after. In fact, I for one would love to see the ++x/--x et al removed from the language, as they ARE based on the capabilities of the PDP-11. That aside, I have ported C to the 6502, 6800, and even the PDP-10 processors, retaining various portions of the language as were feasible on a particular CPU. They worked very well. Yes, I had to rely on assembly code for the "down and dirty" things. But not all CPUs have addressable registers, which would make such a capability in C a tad awkward, to say the least. To quote the Bible (sort of): "Render unto the assembler that which is the assembler's."
Having "flying hours" in Assembly is a precious thing - it marks one's way of thinking for life with a strong base and steady sight on all kind of nasty problems. I see your wish for an untainted language, however I am in despair dealing STILL with such basic things as basic memory management (hash, memmem, b-tree) functions. I firmly believe that building programs on rotten ground (slow basic functions) is out-of-style boring and artless dead-end. Call me out-of-date and delusional but I still have strong romantic affinity towards artistic approach in programming - I hate this doomed situation in which we all are trapped now - fast PCs and slow software not exploiting the might given by the technology - I am sure you feel that even better than me having dealt with assemblers. In a few words: it is a crime not to utilize fully the potential of a given processor - this shows sloppy attitude not only toward programming but also toward everything else. I would like to hear from you how do you see one particular task properly get done: How to rip (down to unique phrases of some order) the whole electronic English language with a C console program (or rather etude). I am talking about my pride-and-joy Leprechaun - the fastest written in C x-gram ripper on the Internet. My point is that out there exist a lot of programming languages but when comes to the most basic things as helping people with natural languages sidekick/statistical/suggestion tools programmers are in debt to users.
Get down get down get down get it on show love and give it up What are you waiting on?
-
Having "flying hours" in Assembly is a precious thing - it marks one's way of thinking for life with a strong base and steady sight on all kind of nasty problems. I see your wish for an untainted language, however I am in despair dealing STILL with such basic things as basic memory management (hash, memmem, b-tree) functions. I firmly believe that building programs on rotten ground (slow basic functions) is out-of-style boring and artless dead-end. Call me out-of-date and delusional but I still have strong romantic affinity towards artistic approach in programming - I hate this doomed situation in which we all are trapped now - fast PCs and slow software not exploiting the might given by the technology - I am sure you feel that even better than me having dealt with assemblers. In a few words: it is a crime not to utilize fully the potential of a given processor - this shows sloppy attitude not only toward programming but also toward everything else. I would like to hear from you how do you see one particular task properly get done: How to rip (down to unique phrases of some order) the whole electronic English language with a C console program (or rather etude). I am talking about my pride-and-joy Leprechaun - the fastest written in C x-gram ripper on the Internet. My point is that out there exist a lot of programming languages but when comes to the most basic things as helping people with natural languages sidekick/statistical/suggestion tools programmers are in debt to users.
Get down get down get down get it on show love and give it up What are you waiting on?
Much of my professional software design/implementation career was filled with the necessity to milk the last smidgen of performance out of some code. The environment was based on DEC PDP-10 processors (KL series) in a commercial time-sharing setting (CompuServe). For the most part we used a language called BLISS, which was a truly magnificent implementation language. Its optimizer produced code that was probably 99% as good as I could do by "hand". This, coupled with something I've not seen elsewhere in a programming language - lexical processing - was the foundation of a great deal of CompuServe's CIS software.

One of my areas involved the error-correcting protocols used for file upload/download. The most used protocol, B Plus, was the cornerstone of what was called HMI, the Host Micro Interface. This protocol performed so well that it earned me the "honor" of frequent verbal battles with the OS developers, because a B Plus data transfer would hit the "sweet spot" in performance. That means that by the time a packet finished being transmitted, the acknowledgement of the previous packet would already be waiting, and several packets could usually be sent in one time-slice. Fun stuff!

Natural language processing is something I often pondered, with no real breakthroughs. I always believed that a different kind of memory access was needed, something akin to the current CPU chips (Pentium) with their cache memory. The cache is a marvel of design. That technology could be extended to what I term "content-addressable memory", where, say, a word's actual numeric letter values are the "address" presented to be found. But this is only a beginning of the processing utilized by the human brain. I think that neural-network programming might open the door to fast cognitive processing, but we're a loooong way from anything practical along these lines.

Way back, the trio of Newell, Simon and Shaw were among the first to contemplate what they termed "Information Processing". One of them said, "The problem with trying to teach a computer to understand natural language is that so few of we humans understand it to begin with." After fifty-some years, the situation really hasn't progressed all that much.
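The "content-addressable memory" idea can at least be mimicked in software. A minimal sketch of the concept (my illustration, nothing CompuServe ever ran; the djb2 hash and the table size are arbitrary choices): the letters of a word are turned directly into the address at which its associations live.

#include <stdio.h>

#define TABLE_SIZE 1024

/* Derive an "address" directly from a word's letter values -- a
   software stand-in for content-addressable memory. (Collisions are
   ignored in this toy; a real design would have to resolve them.) */
static unsigned addr_of(const char *word)
{
    unsigned h = 5381;                  /* djb2 string hash */
    while (*word != '\0')
        h = h * 33 + (unsigned char)*word++;
    return h % TABLE_SIZE;
}

int main(void)
{
    static const char *table[TABLE_SIZE];   /* zero-initialized */

    table[addr_of("rose")] = "flower; thorns; smell; mother; girl";

    const char *data = table[addr_of("rose")];
    printf("%s\n", data != NULL ? data : "(no association)");
    return 0;
}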
-
Thanks - despite the difference between us (me being an amateur) I share your vision. The powerful (graph) algorithms implemented by some real programmers are what fascinate me; I have had this romantic expectation for a long time. Sadly my knowledge of graphs is next to nothing, which is why I wonder how the/my brute-force i.e. dummy approach would handle (by my calculations) some 5 billion four-word phrases, i.e. 4-grams.

>After fifty-some years, the situation really hasn't progressed all that much.

Fully agree; a change (at least some small but firm steps) is needed, otherwise we all look like a monkey playing with a PlayStation. Very glad to learn from you, best regards.
Get down get down get down get it on show love and give it up What are you waiting on?
-
Well, we are a very long way from the HAL computer of "2001", that's for sure. There are several components to the problem of computer "intelligence".

The first problem is one of storage capacity. The human brain is made up of on the order of a hundred billion neurons. Each neuron is linked with from 10 to 10,000 other neurons. The number of pathways is in the vicinity of 10 to the 100th power. That's a one followed by 100 zeros - a mighty huge number! It exceeds the estimated number of particles in the entire universe! The brain stores everything in this massive network of neurons in the form of nerve impulses that traverse one or more of these pathways. On top of that, the individual circulating thoughts (or memories, or whatever) are also connected by other pathways of circulating nerve impulses. In computer terms, the brain is an organic associative memory.

Add to the above the existence of emotions, images, sounds, smells, and all of the other senses, all of which are remembered in the same way. Thus, when you hear the word "rose" your thoughts instantly conjure up an image of a rose, its smell, the fact that the plant has thorns, a memory of the time you gave your mother a rose and she hugged you, perhaps of a girl to whom you gave a rose and she kissed you. All of this happens in a flash, filling your consciousness with great feelings. So the question is, how do we accomplish this with "artificial intelligence"? It beats me!

Animals are no different. They possess memories and emotions, even love, strange as that might seem. When I was in high school, I kept an aquarium. One of my favorite fish was a male betta "fighting" fish. (The only thing it will fight with is another male betta.) We used to catch flies and toss them into the tank, whereupon the betta would swim over and gulp them down. During the winter, small amounts of ground meat replaced the flies. After a while of doing this, the betta would swim over to the side of the tank whenever I approached. He would accept dead flies or ground meat from my fingers, and even let me gently stroke his sides. But only for me. My parents would not receive the same acceptance. I believe that silly fish loved me!

I think that the key lies in what we call experience. As we mature, we undergo the slow implanting of our conglomerate memories and associations. In order to produce a HAL, it would have to undergo a "growing up" process. And there are so many variables that it staggers the mind, and surely would tax the brains of several billions of pro
-
I agree with most of the things you've written. As for neurons, my humble but definitive opinion: they are only a DISH/net capturing vibrations from the command center and other sources outside the human entity, i.e. this neuron net is used as operational RAM; it doesn't represent the power of a given person but his ability to process according to those strings/nets (a marionette play). The same goes for all living things (everything is alive, as far as I know).

Old news (around 2006) at http://www.netezza.com/releases/2006/release111606.htm[^]:

[ Netezza’s system uses a data-intensive rather than traditional compute-intensive approach. Netezza’s data management appliance, a massively parallel data analytic system, was evaluated by searching a massive semantic graph used to determine relationships between objects by storing nodes (the object) and how they link to other objects (an edge). Semantic graphs are an important technology for analyzing relationships in large data sets. The graph contained 300 billion edges and 11 billion nodes, the largest known graph search to date. Computer scientists ran level-set expansion, and bi-directional, breadth-first search against the data. More than 90 percent of the searches were completed in less than five minutes, demonstrating Netezza’s ability to scale for managing and analyzing the largest data sets. These types of calculations have traditionally been conducted on very large and complex systems, but can now be executed on an appliance that is easy to deploy, maintain and use. “We’re looking for simplicity and scalability,” said John Johnson, a LLNL computer scientist. ]

It is obvious that all computational approaches suck without a super-vast data set. Again, I am stunned that, with all the computational power we currently possess, we (the INTERNET users) don't have even a dummy (not AI-class) English phrase checker?! What is the problem? My rough estimate says: 5 to 10 billion word1-word2-word3-word4 quadruples; the rest of the orders likewise. Thanks, fun is important, yes.
Get down get down get down get it on show love and give it up What are you waiting on?
-
"5 to 10 billion word1-word2-word3-word4 quadruples, the rest orders likewise." What happens if a given phrase is or is not found? Do these word1-2-3-4 quads exhaust the English language? What do you mean by "the rest orders likewise"? It still remains that the human brain will out-perform silicon at this point in technology, at least. My belief is that we're missing something, some key understanding or technology. This is not to say that it won't materialize someday. Willie said it best: "There are more things in Heaven and Earth than are dealt with in your philosophy, Horatio." What we need is some new philosophy!
-
"5 to 10 billion word1-word2-word3-word4 quadruples, the rest orders likewise." What happens if a given phrase is or is not found? Do these word1-2-3-4 quads exhaust the English language? What do you mean by "the rest orders likewise"? It still remains that the human brain will out-perform silicon at this point in technology, at least. My belief is that we're missing something, some key understanding or technology. This is not to say that it won't materialize someday. Willie said it best: "There are more things in Heaven and Earth than are dealt with in your philosophy, Horatio." What we need is some new philosophy!
Despite my semi-naive understanding of this matter, I have firm confidence in brute-force, i.e. exhaustive, approaches. That is why I mentioned IBM's nifty approach of utilizing huge data sets rather than simply relying on some ala-bala algorithms/heuristics.

>What happens if a given phrase is or is not found?

The dummy phrase checker will report rank X or rank 0, respectively.

>Do these word1-2-3-4 quads exhaust the English language?

Almost, which is enough for the goal being chased - to give the usage up to some date. Of course, when one needs a 5-word phrase, order 5 is the skeleton whereas orders 1, 2, 3, 4 and 6, ... form the flesh.

>What do you mean by "the rest of the orders likewise"?

The rest of the (useful, or rather most needed) orders are 1, 2, ..., 9, ... I meant that n-grams, regardless of order and number of entries, are no different from each other. Here different approaches exist: dummy brute-force ranking, graphs, ... My intent is to traverse the easiest first. Ranking is the most interesting thing (for me); it would allow very useful output: not just, as in the example below, tutting_my_heel_and, but all similar (to what degree is another matter) phrases like you_tutting_at_us, tutting_about_dangerous_sports, a_loud_tutting_noise, ..., reporting the number of occurrences for all lower orders, like:

tutting 320
tutting_my 8
tutting_my_heel 6
tutting_my_heel_and 2

I cannot explain the full picture here, but in appearance it is very close to a family tree - I want to create the first such x-gram English-language dictionary; my desire is for all major phrases to be crucified there - I see each phrase as an easy-on-the-eyes big table. I am not at all afraid of this formidable task, because I rely on my ignorance; it is my primal beta-tester and friend, not, as one would expect, intelligence. Ignorance, when used as feedback, is an awesome source of inspiration and a superb way to explore/calibrate problematic etudes. My English is inferior and broken forever; this helps me a lot during the modeling - instead of a handicap, it is my advantage. Some links if you are interested: http://www.allthelyrics.com/forum/learning-english-language/116652-english-360-from-angles.html[^] At the link below I tried to analyze the usage of 'tutting' in English in an attempt to answer the next question:
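A minimal sketch of that family-tree report (the counts are the hypothetical ones from the example above, and the hard-coded table stands in for a corpus-derived index): given a 4-gram, print the occurrence count of the phrase and of every lower-order prefix.

#include <stdio.h>
#include <string.h>

/* Toy lookup table of n-gram counts; real data would come from a
   multi-gigabyte corpus, not a hard-coded array. */
struct entry { const char *gram; int count; };

static const struct entry table[] = {
    { "tutting",               320 },
    { "tutting_my",              8 },
    { "tutting_my_heel",         6 },
    { "tutting_my_heel_and",     2 },
};

static int rank_of(const char *gram)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].gram, gram) == 0)
            return table[i].count;
    return 0;                   /* rank 0: phrase not found */
}

int main(void)
{
    char gram[128] = "tutting_my_heel_and";

    /* Report the full 4-gram, then each lower order in turn by
       trimming the trailing "_word". */
    for (;;) {
        printf("%-24s %d\n", gram, rank_of(gram));
        char *cut = strrchr(gram, '_');
        if (cut == NULL)
            break;
        *cut = '\0';
    }
    return 0;
}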
-
"I've been walking in the same way as I did And missing out the cracks in the pavement And tutting my heel and strutting my feet." Well, when it comes to songs and poetry, it's pretty much a matter of "it means what you think it means." Same with art. My guess: "I've been walking as I always have And avoiding the cracks in the pavement And clicking my heel and walking pompously." My best wishes for your success.
-
You haven't commented on my "interpretation". Any ideas why your n-gram process failed with "tutting"? What about "stutting"?
-
Please excuse me - I thought that I was too talkative and you had had enough of me; I rarely find people to match my chatterbox-ness. I think your version is as literal as possible - I myself read the verse as you do.

>Any ideas why your n-gram process failed with "tutting"? What about "stutting"?

In the following posts I crucified, i.e. explored fully, the usage of 'tutting': http://www.allthelyrics.com/forum/learning-english-language/38909-questions-about-english-language-and-grammar-95.html#post876443[^] The goal then/there was not to do stemming. 'Stutting' is a new word to me; it is to be analyzed tonight. Please tell me where the failure is; as for my current n-gram procedure (it has not been updated for a year or so, grmbl), it is still in its infancy. 'Strutting' is well established compared to 'tutting'; even I knew its usage. I salute you with a song (sung by a very talented young artist): Adam Lambert - If I Had You http://www.youtube.com/watch?v=wmXQFwlD7vk&feature=related[^]
And I'm workin' my strut but I know it don't matter
All we need in this world is some love

In this cool video you can see the strut in action. I love the next song; I salute everyone who holds the romantic view: Adam Lambert - Sleepwalker (Glam Nation Live) http://www.youtube.com/watch?v=TZACqNnO9-E[^]
I can't turn this around
I keep running into walls that I can't break down
I said I just wander around
With my eyes wide shut because of you
I'm a sleepwalker walker walker

More or less I too am a sleepwalker, a kind of fool - I recommend you check how profound the etymology of 'fool' is at http://www.etymonline.com/index.php?term=fool[^]. It will be very useful to see you
-
For what it's worth, the given usages of "tutting" and "strutting" are incorrect. "Tut" is a word that dictionary.com defines as:

tut [pronounced as an alveolar click; spelling pron. tuht] interjection, noun, verb, tut·ted, tut·ting.
interjection
1. (used as an exclamation of contempt, disdain, impatience, etc.)
2. for shame!
noun
3. an exclamation of "tut."
================
Hence, the phrase "tutting my heel" has no literal meaning in English, because "tut" is an example of onomatopoeia:

Onomatopoeia (also spelled onomatopœia, from Greek: ονοματοποιΐα) is a word or a grouping of words that imitates the sound it is describing, suggesting its source object, such as "click", "bunk", "clang", "buzz", "bang", or animal noises such as "oink", "moo", or "meow". The word is a synthesis of the Greek words όνομα (onoma, = "name") and ποιέω (poieō, = "I make" or "I create"), thus it essentially means "name creation", although it makes more sense combining "name" and "I do", meaning it is named (and spelled) as it sounds (e.g. quack, bang, etc.). Onomatopoeic words differ across languages because they always have to conform to some extent to the broader linguistic system they are part of. Thus the Norwegian tikk takk for the sound of a clock could never be a Dutch word because Dutch words never have long consonants at the end of the word; accordingly, the Dutch equivalent is tik tak.
=============
Similarly, "strutting my feet" is technically wrong, since "my feet" is redundant; "strutting" already means "moving my feet pompously."

What this all boils down to is something I mentioned previously, namely that poetry, song lyrics and sometimes prose do not necessarily adhere to the letter of English conventions. Poetry and lyrics rely on such things as meter and rhyme, at the expense of the normally accepted and defined rules of word order and association.

For these reasons, I firmly believe that n-grams do not and cannot "define" words. They can only provide what has already been written that includes the given words. The "meaning" of a particular word must begin with its accepted dictionary meaning. From there its meaning in a particular context might be inferred. But even this inference is not guaranteed, because all human languages contain idioms, which are words used together to mean something often unrelated to the accepted meanings of the words involved. In English we can say, "He was hot under the collar", which means he was angry. Or, "She was spitting tacks", mean
-
Thanks for sharing. I disagree with your analysis.

>... the phrase "tutting my heel" has no literal meaning in English ...

It already has one, if you ask me. Don't you think that Adele is coining a new definition (if you like) by using 'tutting' without carrying the already widely accepted meanings? Here I rewrite my answer: "I think the tutting sound is just like the clicking one; here tutting is a transitive verb causing the heel (women's heels have a very small surface and are hard, which produces the cha-cha-like sound) to hit the ground loudly."

>... English conventions ...

For me there is no such thing. Or at least they are for teachers - I am not one and never will be.

>... namely that poetry, song lyrics and sometimes prose do not necessarily adhere to the letter of English conventions.

You are the one who sees them as separate categories; here lie our different apprehensions.

>... I firmly believe that n-grams do not and cannot "define" words.

Yes, they provide the context (or rather the environment of a word, i.e. its adjacent words) as the next (simplified/chunked) stage (they are the building blocks of sentences). Your statement is similar to 'sentences cannot "define" words.' Of course, but by taking all (yes, all) sentences you get all the definitions included - they just are not deciphered/explained.

>They can only provide what has already been written that includes the given words.

Yes, yes, that is their purpose. You say 'only' as if this were something of little or inferior value - in my world it is the very base of EVERYTHING; dictionaries are pillars, not the foundation.

>... "strutting" already means "moving my feet pompously."

Wrong, AFAIK. I checked 4 dictionaries - no trace whatsoever. How did you pull this out? Can you give a sentence or definition supporting it?

>The "meaning" of a particular word must begin with its accepted dictionary meaning. From there its meaning in a particular context might be inferred.

Double yes. As for idioms, nothing new here; the second yes rules them.

>So poetry and lyrics do not necessarily utilize the literal meaning of the words involved.

I don't know why you said that! Obviously we have different views. And here comes a light-stepping approach of mine. My favorite dictionary, 'The American Heritage Dictionary of the English Language', says:

[ stutter intr. & tr.v. stuttered, stuttering, stutters To speak or utter with a spasmodic repetition or prolongation of sounds. n. The act or habit of stuttering. [Frequentative of dialectal stut, from