IT history
Before C there were lots of higher-level assembly languages (Jean Sammet wrote, in the late 60-ies, a thick book with the Tower of Babel on its cover). I myself used assembler (PDP-8, PDP-9) until I ported BCPL to the PDP-9, later using BCPL on and for the PDP-11, with cross compilation for the P860 (a small Philips 16-bit computer with only paper tape input and output). I actually wrote a lot of software in BCPL, including parser generators and a compiler for Algol 60 on the PDP-11. It was around 1978 that we got Unix on a PDP-11 and obtained the original C book.
-
Okay, without starting a religious argument - best linux distro for development?
I develop SDR (software defined radio) software (DAB (Digital Audio Broadcasting) in the old TV band III and DRM (Digital Radio Mondiale) on shortwave) since I retired, all on Fedora. Ubuntu is, as far as I can see, the simplest one to install and it has a large user base. The reason I chose Fedora is the great support for cross compilation to Windows. Ubuntu is used by me for generating AppImages of the packages (AppImages are kind of containers). Ubuntu is easy to use; however, the releases do not contain the most recent versions of the various packages. For both Ubuntu and Fedora I use the default GUI, it is not Windows-like and that suits me well.
-
If you could run all your apps (games too) on Linux?
The first version of Linux that I used was on floppy discs (Soft Landing Systems) on a 386 machine in '92 (kernel version less than 1). Since 2009 I write SDR software (C++, Qt framework) on Linux; there was a question to have it for Windows as well. So, using the excellent cross compilation facilities with Mingw64 on Fedora, I cross compile the software - and test it on a Windows box. After a while I decided to run Windows and Linux on the same development laptop, using dual boot. I (almost) only use Windows for testing my stuff. Someone complained about Photoshop not running on Linux; I use Gimp for that, and LibreOffice for office-like things. One thing I am missing on Linux is the photo printer software that goes with the Canon inkjet printer. Once - just as an exercise - I installed Mingw64 on Windows to be able to compile my applications locally, but using the tool(chain)s on Linux is much easier for me. An interesting observation is that on average running the applications on Windows takes up to 5 times more CPU power than running them on Linux.
While Qt has a "Qt Creator", I'm too old for that stuff, so I am using a command window with manually given commands. Vim is the editor, qmake and cmake are the makefile generators; during development I always generate with the sanitizer libraries linked in and - if needed - debug with gdb. Applications are reasonably sized, somewhere between 40000 and 50000 lines of C++, and they support a variety of SDR devices. I also wrote a few plugins for SDRuno, an SDR framework that only runs on Windows. SDRuno uses nana for the GUI issues, so my plugins use it as well. I had to surrender to Microsoft MSVC, and while the plugins work, I really dislike the MSVC environment. While the error messages are (more or less) reasonable, I really dislike the behaviour of the toolset; it feels like a big brother that knows everything better than the programmer and is eager to take over control.
I use Fedora, it offers by far the best cross compilation facility for Windows. The only drawback of using Fedora is the speed with which new releases are prepared: once per half year. Updating to the newest version is rather simple though. I use Ubuntu, always an older version, in a VM for creating AppImages (kind of containers for Unix) of the applications. With Windows I have problems (apart from using the MSVC): a. whenever I am in a hurry, the system starts updating and shouts "do not switch off the computer"; b. the dependency of the dif
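For readers who want to try this kind of cross compilation setup, below is a minimal sketch of a CMake toolchain file for Fedora's Mingw64 packages. The compiler names and the sysroot path are the ones Fedora's mingw64-* packages install by default; adapt them if your distribution differs (Fedora also ships a mingw64-cmake wrapper that sets all of this up for you):

```cmake
# mingw64-toolchain.cmake - cross compile from Fedora Linux to 64-bit Windows
# (paths and names as installed by Fedora's mingw64-gcc packages)
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_C_COMPILER   x86_64-w64-mingw32-gcc)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++)
set(CMAKE_RC_COMPILER  x86_64-w64-mingw32-windres)
# look for headers and libraries in the mingw sysroot, but for build
# programs on the host
set(CMAKE_FIND_ROOT_PATH /usr/x86_64-w64-mingw32/sys-root/mingw)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

A build would then be configured with something like `cmake -DCMAKE_TOOLCHAIN_FILE=mingw64-toolchain.cmake ..` and tested on the Windows box afterwards.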
-
Software Development: The Great Equalizer
Well, my story is a little boring: I also grew up, went to school and university. I am always glad that I learned a lot about mathematics at university; I am using Fourier transforms quite often and even Laplace. Wrt computers (computer science?) I learned about the PDP-8 and PDP-9, programming in assembler code, using DECtapes for storage. While I am not using PDP-8 or PDP-9 instruction sets nowadays, I really believe that it helps me in my programming. What I further learned was some language theory (type X grammars, 2VW grammars, attribute grammars, etc. etc.), typical things you best learn when you are young. Now from time to time I even use these formalisms to structure my programs. I am fully aware of the fact that after my university education I could write programs but essentially could not program. In the early 70-ies I wrote some parser generators (LL and LALR) and a few compilers (one for Algol 60), and to put it mildly: with my current experience I would have written them differently. Nevertheless, for writing these programs I needed some math, though not calculus. But these programs had a size such that one starts to think about structuring the code and the development process (the language of the 70-ies was for me BCPL). After the 80-ies, with Unix and C, I ended up as manager. The last 20 years of my working career I was involved in management, and there were days that I did not use a Fourier or Laplace transform or thought about formal verification of program (fragments) :-D . After my retirement I started programming again at a level that - at least that is what I think - would have been impossible without some formal training and some experience in my younger years. My current domain is software defined radio, and there is quite some math in my programs. Summarizing: writing good code is not something you learn from a book, but a slightly more formal training may make it easier to understand what code is good, why it is good, and what code smells.
-
What's wrong with Java?
It depends on what domain you are working in. I wrote a DAB (Digital Audio Broadcasting) decoder in C++ and simplified versions - just as a programming exercise - in Ada and in Java. A program like such a decoder requires extensive interaction with libraries written in C (to name a few: device handling, FFT transforms, and AAC decoding). In my personal opinion, binding Java structures to C libraries is a crime. Btw, the Ada binding is simpler since I was using the Gnat compiler system, but even then ... Java is just a language; it is not my choice, but for many applications it seems more or less OK. Personally I do not like the GUI handling, but that is probably a matter of taste. The misery with binding to non-Java (read: C) libraries is such that I would not recommend it for applications that depend on that kind of libraries. (I dislike all kinds of so-called integrated environments such as IntelliJ or whatever; right now I am writing some stuff where I (more or less) have to use VS as development environment. It is probably my ignorance, but I absolutely dislike the destruction of the formats I use in my coding, and the error messages are a horror. For me the command line tools such as vim, qmake, make and the GCC suite - with gdb as debugger under Linux - are the ideal development tools.)
-
Thought for the ages
The very first "larger" (i.e. 4000 instructions) program I wrote was a TRAC interpreter on the PDP-9 in assembler language (I had some experience with writing assembler programs on the PDP-8). (TRAC, Text Reckoning And Compiling, was an interpreted language, essentially a big macro expander; it was popular in the late 60-ies and early 70-ies, search for Calvin Mooers for details.) As it happened, the PDP-9 had an excellent debugger - named SCALP - and since the program contained lots of pointers I really needed this debugger from time to time (the PDP-9 had just one accumulator, no further registers). Ever since that time I told students - I kept working in academia - to pay attention to debugging, and I gave some demonstrations. But giving a "course" in debugging, no. It is too dependent on the project being debugged; the basics are that you can interrupt the processing (breakpoints), inspect registers and execute step by step. Of course, on the PDP-9 step by step meant executing a single instruction, while e.g. the current gnu debugger (I must admit, I develop under Linux) provides lots more possibilities. As a side note: one of the nice "features" of the PDP-9 was that you could reduce the clock speed and - when you were experienced - follow the execution of the program on the lights on the control panel. Came in handy when your program was looping. Conclusion: yes, debugging should be taught, but preferably not in a classroom with someone explaining all the debugger commands on a blackboard. Guided experience is needed here.
-
I am appalled
Here - Netherlands - most people wear masks in shops; however, a. they do not wear the mask with the nose covered, b. they do not keep any distance anymore. This morning in the local supermarket I was picking something from a lower shelf and the guy (seemed a schoolboy to me) who (tried) to fill the higher shelves bent over me!! Ever since people have been obliged to wear a mask you see that distance is not respected anymore! (But maybe there are other countries where people behave better.)
-
Curious...
UCSD Pascal was an integrated system, running on a variety of Z80 and 6502 based systems. Pretty good system! Not too fast, using P-code as a "VM". So, one could do better than Basic.
-
What IDE is your choice for C/C++ project?
I develop - as hobby - SDR software. The "toolset" I am using is:
a. the gcc/g++ toolchain (mingw for cross compilation)
b. the gdb debugger
c. CMake and qmake as Makefile generators, and Make
d. vi(m) for all editing
e. LaTeX for creating documents
I work under Linux, develop for Linux, and cross compile for Windows (using the mingw64 toolchain) and the RPI (but that is Linux of course). I experimented with VS on Linux: horrible. I experimented with Qt Creator on Linux: less horrible, but completely useless. I do use a lot of Qt stuff though, and I am willing to use the Qt designer to prepare widgets, although the simpler ones are done just in code (see the sketch below). The point is: who is in control? Those fancy IDE's seem to know things better than I do; they seem to enforce all kinds of decisions and want me to follow their rules. I'm old enough to know better!!!! I really do not like that, so I am in full control of the software and its development. (My current (main) project consists of over 100 files, it supports 6 to 8 different input devices, and it comprises about 50000 lines of code. So, yes, it is a toy project since it is hobby, but no, it is not a toy project when looking at the size and complexity.)
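As an illustration of building a simple widget directly in code rather than with the Qt designer, here is a minimal sketch (assuming Qt5 or Qt6; the labels and the slot are made up for the example):

```cpp
// a small widget built directly in code, no Designer .ui file needed
#include <QApplication>
#include <QWidget>
#include <QVBoxLayout>
#include <QPushButton>
#include <QLabel>

int main (int argc, char **argv) {
    QApplication app (argc, argv);
    QWidget window;
    auto *layout = new QVBoxLayout (&window);   // window takes ownership
    auto *label  = new QLabel ("frequency (kHz)");
    auto *button = new QPushButton ("start");
    layout->addWidget (label);
    layout->addWidget (button);
    // connect the button to a lambda instead of a Designer-generated slot
    QObject::connect (button, &QPushButton::clicked,
                      [label] () { label->setText ("running"); });
    window.show ();
    return app.exec ();
}
```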
-
The learning rush
I was trained in the late 60-ies and early 70-ies, and then - of course - doing computers and computer science was an adventure. Developments were going fast, languages were being developed and explored. Compiler construction was a real challenge then. The last 20 years of my professional life I was in a more managerial role, but oh boy, you can learn a lot then! About people, politics and organizations. After retirement I picked up programming again, and it is really different from the days of working on a PDP-9 or 11. But, as long as there is a clear view of what I want to make (I am afraid it is more the technical stuff), I really enjoy learning about Fourier, Laplace, Javascript, PHP, C++ and .... So, yes, I understand what you are saying, and I think it is a wonderful attitude.
-
Programming languages - fun vs. disciplined
In the 60-ies there was of course Algol 60; I did a numerical analysis course in it (later on I did a compiler on/for the PDP-11, and much later an Algol -> C compiler). Algol 60 never was popular, but according to people like Hoare and Dijkstra, "Algol 60 was a great improvement on its successors". The language was the basis for many other languages, of course Simula and Algol 68, but also Pascal. In the early 70-ies there was BCPL (i.e. Basic CPL), at that time a quite popular systems implementation language (the 70-ies were the years of an explosion of languages). Of course, BCPL was the grandfather of C. BCPL was portable; we ported it to a PDP-9 and a Philips P860, and while not very popular these days, it still exists. Personally, I learned programming in assembler on a PDP-8 and PDP-9; by the time we started using PDP-11's we first had BCPL and later - with the arrival of Unix - the C language. My favorite language was Ada, at least the 83 and 95 versions. The language was based on the thought that the readability of programs was/is essential: it still is. The language I am using now is a (subset of) C++; the reason is simple: it is available on Linux and there are decent cross compilation facilities for Windows. For the applications I write I need some performance; while in a sense it is real time processing (i.e. SDR applications), it is not hard real time. C++, when used with caution, is a decent language, with good compilers on Linux x86/64, Windows and RPI type systems. Just my 2 cts.
-
Heck yeah! Free insight
Interesting. Can you elaborate a little on how you decide when to activate one (or more) of the (sub?)parsers? Of course you can activate a number of parsers simultaneously on a given input (sub)string. Long, long ago, in one of our compilers we used a mix of bottom up and top down (recursive descent) parsers; is that (more or less) similar to what you are describing?
-
One thing I do like about linux
I have been using a dual boot system with Windows/Linux since the early 90-ies. The usual setup is that I make one "shared" NTFS partition where my development stuff is on, accessible from within Linux and accessible from Windows. Most of my software uses Qt. I do a lot of cross compiling for Windows; Mingw64 is an excellent vehicle and is well supported on the Fedora part of my system. On Windows I am also using Mingw, since using Qt with VS is a crime. Wrt Vim: I'm not sure what level of education is needed for handling Vim, I always thought it was kindergarten level.
-
Well, another week over...
Well, when I was working/playing in the field of compilers (a long time ago), some set theory was used; however, when designing the different algorithms for the various tree walks (lots of tree walks and list processing), at least one eye was on the big O. Especially for one pass load and go compilers, speed was essential (and at that time computers were not that fast), but optimizations were more related to clever programming than to algorithms (e.g. the design of a set representation strongly depends on what you want to do with the set and its elements; see the sketch below). In my current (hobby) working area, software defined radio, both calculus and discrete mathematics are needed. Processing samples - with lots of Fourier transforms and some Laplace transforms for the filters - is basically calculus oriented. Of course one needs to look at performance: you do not want to miss too many samples, but handling performance is also here more an engineering issue than an algorithmic issue. Translating samples to bits and handling bits is - seen from a math position - different; Viterbi decoding is a major component, as is Reed-Solomon decoding, the latter using algebra (group theory). But also here, the final performance largely depends - next to selecting decent algorithms - on clever programming and decent engineering. Fortunately, for most of these "math" components there exist libraries (and one can write one's own). But the math, both the calculus and the discrete math, is basically fun to understand, and it is always a learning experience to write a library component for it.
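To illustrate the point about set representations, here is a minimal C++ sketch of two representations for the small token sets a parser generator juggles (the names and the universe size are made up for the example). Which one wins depends entirely on the operations you perform most:

```cpp
// two set representations for a small, fixed universe of tokens,
// as used in e.g. FIRST/FOLLOW computations in a parser generator
#include <bitset>
#include <cstdint>
#include <vector>

constexpr int nTokens = 128;        // assumed size of the token universe

// dense representation: O(1) insert and membership, union in a few
// machine words - ideal when sets are unioned over and over in a
// fixed point iteration
using tokenSet = std::bitset<nTokens>;

inline tokenSet unite (const tokenSet &a, const tokenSet &b) {
    return a | b;                   // word-parallel union
}

// sparse representation: cheap to iterate when sets stay small,
// but membership and union are linear in the number of elements
using sparseSet = std::vector<uint8_t>;

inline bool member (const sparseSet &s, uint8_t t) {
    for (uint8_t e : s)
        if (e == t) return true;    // O(|s|) lookup
    return false;
}
```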
-
from the earlier daze in Silicon Valley to my own daze as jaded (?) 'powerless user'
One thing is certain: the size of software grew at least as fast as that of the hardware, and not always for the best. The first Algol 60 compiler I wrote (yes, using my own parser generator) ran in less than 16K on a 32K PDP-11 (early 70-ies); it was written in BCPL. No "fancy" stuff like IDE's that think they know what you want, just a simple editor (cannot remember the name of the editor, it was probably something like ed on RT-11, well before Unix came). Nearly 20 years ago (I did professionally nothing with computers at that time), I wrote in my spare time an Algol 60 -> C translator; it still runs, but the executable takes over 700 KByte (I know, the size is nothing compared to that of a C compiler). The current application I am working on (hobby, something with SDR), when packed as a Windows installer, takes nearly 60 MByte (without dll's it takes 13 MByte). I must admit that the signal processing (2048000 samples/second, with approximately one FFT per msec, i.e. over some 2048 samples) could not have been done on a PDP-11, but the size increase of applications is dramatic. Wrt quality: in the 70-ies there was this belief that - on average - each 100 lines of source code would contain (at least) one error; I believe that currently it is worse; imagine then a 100 times larger program ..... I do not deal with IDE's; they think they know what I want, and if I try to express that I do not want it, they more or less enforce it on me. Actually, for me that is the main reason not to try a language like C#, since there does not seem to be a single tutorial that describes the language without forcing you to install some crappy IDE. When programming, I want to be in full control, so separate editors, compilers and debuggers are what I need, and therefore I'll stay with Linux. But of course that is different from using the computer as an administrative vehicle; then I want indeed to say things like: find me my wedding photos, call the plumber to repair the faucet in the bathroom.
-
Generation, what's left?
The main difference is related to access to resources. In 1970 I wrote a couple of assembler programs on and for the PDP-8 and PDP-9, and there was virtually no one that you could ask a question. At the end of the 70-ies we hacked Unix kernels and there were some local user groups to share ideas. Note that the PDP-11 had an address space of 64K, and lots of effort went into optimizing the use of limited resources. On an RP02 disk, putting the free blocks on tracks such that the amount of waiting time was limited increased performance tremendously (of course, adding an overlay structure to user programs to overcome the limited address space caused a decrease in performance). Nevertheless, on a PDP-11/70 with a whole (i.e. 1) MByte of memory, two RL02 disks and an RP03 disk we ran a student lab for 30 to 40 students simultaneously, with a link to the university's mainframe. In the 70-ies and 80-ies - when writing compilers - we exchanged ideas with others using - hard to believe now - regular mail. Sending a draft report from Europe to Australia with additions and corrections being sent back took several weeks (sometimes more than a month), until email arrived (around 82 or 83). The basic facts are that nowadays you do not have to worry about memory resources (recall, the PDP-8 had 4K 12 bit words, and for assembling a program you had to load the assembler from paper tape), you do not have to worry about storage capacity, and communication is now (almost) instant. Ask a question and in 10 minutes (seconds sometimes) you have an answer!! On the other hand, the domain grew. In the 60-ies and 70-ies you hardly had programming languages; knowing one or two languages, having some familiarity with a computer, and being able to use some vague terms, you were considered a specialist in the whole field. Now you have to specialize in front end technologies, backend technologies, middleware, cloud, etc. etc., while specialists in different subdomains do not understand each other!! As an example, look at the reports of the codewitch: while in the 70-ies (most of) the stuff she writes about was part of any decent undergraduate program, I bet that 90% of the CP population does not understand the technologies she is applying. (I take that as an example, since in the 70-ies and 80-ies I worked in the field of compilers and happen to understand this part of computer science.)
-
Musings on generalized and context sensitive parsing
Indeed, the A68 description was person-made, not mechanically derived. It would be interesting though to see under what degree of constraints it would be possible to handle it mechanically. The simplest variant is of course an underlying LL(1) grammar with predicates (or general functions) between the symbols in the rules, predicates that solely depend on the left context. However, as soon as you are able - which you are - to generate a multitude of parse trees, you can evaluate constraints in a later stage and delete the trees where the predicate results are false. The question is of course not whether or not it is theoretically possible, but whether or not you can formulate (levels of) constraints to make it practical. For AG's it is pretty well known how to limit them such that the attributes can be computed in a given number of scans over the tree; so given a finite number of trees and a finite number of scans per tree to see whether or not a tree is viable, extracting the valid tree is - in principle - solvable. Assuming that in natural language processing the basic elements to parse and interpret are sentences, that should not give too many problems. I do not know much about natural language processing; I do know though that most sentences can be parsed in (many) different ways depending on the context. Context here in a very broad sense! (Due to sloppiness by most speakers, the sentences spoken are inherently ambiguous and require knowledge of the background of the speaker to be able to extract the precise intention of the spoken words.) My examples would be in Dutch, so probably not very meaningful to you. Anyway, good luck with your parser (parsing) development; although I am not very active in that field anymore, I'll keep an eye on your progress.
-
Musings on generalized and context sensitive parsing
Well, there is of course a long history of trying to parse natural languages, and a variety of extensions to CFG's to handle that, usually in the form of attribute grammars of whatever form. One form would be to add predicates to the grammar, where a predicate is used to select the direction of the parse (this is basically why one prefers a handwritten recursive descent parser, since you can mix predicates with symbol recognizers; see the sketch below). You might want to have a look at 2VW grammars (2 level van Wijngaarden grammars); they were used in the description of Algol 68. They are not "just" attribute grammars, since in most of the common attribute grammar descriptions the parse tree is annotated and the attribute grammar is constrained such that the attributes can be computed in a finite - and preferably precomputed - number of passes. 2VW grammars do contain (kinds of) predicates that allow (or forbid) certain derivations. Since you seem to like exotic parsing techniques, handling some form of generalized attributes during the building of the parse tree seems an exercise to keep you busy for a couple of {weeks | months | years} (select one).
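A minimal sketch of what mixing predicates with symbol recognizers looks like in a handwritten recursive descent parser (the grammar, the token representation and the helpers are made up for the example). The predicate consults the left context - here a table of declared type names, as with C's typedef - which is something a plain CFG cannot express:

```cpp
// rule: declaration -> typeName identifier ';'
// a declaration introduces a new type name, so later declarations
// can use it - classic context sensitivity
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

class Parser {
public:
    explicit Parser (std::vector<std::string> t) : tokens (std::move (t)) {}

    void declaration () {
        if (!isDeclaredType (peek ()))        // the predicate, on left context
            throw std::runtime_error ("unknown type: " + peek ());
        accept ();                            // the type name
        std::string name = accept ();         // the new identifier
        expect (";");
        declaredTypes.push_back (name);       // extend the left context
    }

private:
    std::vector<std::string> tokens;
    size_t pos = 0;
    std::vector<std::string> declaredTypes { "int", "real" };

    bool isDeclaredType (const std::string &s) const {
        for (const auto &t : declaredTypes)
            if (t == s) return true;
        return false;
    }
    const std::string &peek () const { return tokens.at (pos); }
    std::string accept () { return tokens.at (pos++); }
    void expect (const std::string &s) {
        if (accept () != s)
            throw std::runtime_error ("expected " + s);
    }
};

// usage: Parser p ({"int", "myType", ";", "myType", "x", ";"});
//        p.declaration (); p.declaration ();  // the second uses the first
```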
-
State Machines - my brain won't do what I want it to
A bank account is an excellent example of a state machine, since it can be viewed on different levels. For the bank your account is in one of two states: you either have money or a debt. For yourself you might add states like:
a. positive balance
b. overdraft (i.e. nearing the end of the month)
c. too much for a regular bank account: need to transfer to a savings account
So, depending on the view you take, you can identify a few states your account is in (a small sketch follows below).
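A minimal C++ sketch of this account-as-state-machine view (the three states are the ones listed above; the threshold amounts are made up for the example):

```cpp
// classify an account balance into the three "personal view" states
#include <iostream>

enum class AccountState { Overdraft, Positive, TooMuch };

AccountState classify (double balance) {
    if (balance < 0.0)     return AccountState::Overdraft;  // a debt
    if (balance > 10000.0) return AccountState::TooMuch;    // move to savings
    return AccountState::Positive;
}

int main () {
    for (double b : { -120.0, 250.0, 25000.0 })
        std::cout << b << " -> state "
                  << static_cast<int> (classify (b)) << '\n';
}
```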
-
LALR vs LR parsing
Both are shift/reduce parsing. The difference between LALR(1) and LR(1) is that LALR(1) is built on the LR(0) automaton (with lookahead sets added), so it has far fewer states than a full LR(1) automaton, at the price of accepting slightly fewer grammars. Operationally they are the same: shift when you have to shift, reduce when you have to reduce, and give an error message when there is a mismatch between the set of expected tokens and the received token.
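To make the "operationally the same" point concrete, here is a toy shift/reduce driver in C++ with hand-built SLR-style tables for the grammar S -> ( S ) | x (the grammar, state numbering and table encoding are made up for the example). An LR(1) and an LALR(1) parser would run exactly this loop; only the contents of the tables differ:

```cpp
// toy shift/reduce driver for the grammar  S -> ( S ) | x
#include <iostream>
#include <string>
#include <vector>

enum class ActKind { Shift, Reduce, Accept, Error };
struct Action { ActKind kind; int arg; };     // arg: next state or rule number
struct Rule   { int rhsLength; };             // every rule here derives S

static const std::vector<Rule> rules = {
    { 0 },   // rule 0: unused (augmented start)
    { 3 },   // rule 1: S -> ( S )
    { 1 }    // rule 2: S -> x
};

static Action action (int state, char tok) {  // the parse table
    switch (state) {
    case 0: case 2:
        if (tok == '(') return { ActKind::Shift, 2 };
        if (tok == 'x') return { ActKind::Shift, 3 };
        break;
    case 1:
        if (tok == '$') return { ActKind::Accept, 0 };
        break;
    case 3:
        if (tok == ')' || tok == '$') return { ActKind::Reduce, 2 };
        break;
    case 4:
        if (tok == ')') return { ActKind::Shift, 5 };
        break;
    case 5:
        if (tok == ')' || tok == '$') return { ActKind::Reduce, 1 };
        break;
    }
    return { ActKind::Error, 0 };
}

static int gotoOnS (int state) {              // goto table, one nonterminal
    return state == 0 ? 1 : 4;
}

static bool parse (const std::string &input) { // input must end in '$'
    std::vector<int> states { 0 };
    size_t i = 0;
    while (true) {
        Action a = action (states.back (), input [i]);
        switch (a.kind) {
        case ActKind::Shift:                  // consume the token, push state
            states.push_back (a.arg); i++; break;
        case ActKind::Reduce:                 // pop the rhs, push goto state
            states.resize (states.size () - rules [a.arg].rhsLength);
            states.push_back (gotoOnS (states.back ()));
            break;
        case ActKind::Accept: return true;
        case ActKind::Error:  return false;   // token not in the expected set
        }
    }
}

int main () {
    std::cout << parse ("((x))$") << ' ' << parse ("(x))$") << '\n';  // 1 0
}
```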