Code Project

trønderen (@trønderen)

Posts: 2.5k · Topics: 70 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts

  • When programming was (still) fun
    T trønderen

    Every Fortran function could have a flag indicating 'busy', to be reset upon return. It could be the return address: Zero it upon return, and if it is non-zero upon entry, then you have made a recursive call.

    If you want to know at compile/link time: a flow analysis tool could follow all possible flow paths, gray-marking every function on the current path being analyzed. If extending the path by one more call hits a gray-marked entry, you have a direct or indirect recursion.
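    A minimal sketch of such a gray-marking pass, assuming the static call graph has already been extracted into a map from function name to callees (all names here are made up for illustration):

    ```cpp
    #include <map>
    #include <string>
    #include <vector>

    // White = not yet visited, Gray = on the call path currently being analyzed,
    // Black = fully analyzed, no recursion reachable through it.
    enum class Mark { White, Gray, Black };

    // Returns true if a direct or indirect recursion is reachable from 'fn'.
    bool reachesRecursion(const std::string& fn,
                          const std::map<std::string, std::vector<std::string>>& calls,
                          std::map<std::string, Mark>& mark) {
        mark[fn] = Mark::Gray;                       // gray-mark: fn is on the current path
        auto it = calls.find(fn);
        if (it != calls.end()) {
            for (const std::string& callee : it->second) {
                if (mark[callee] == Mark::Gray)      // path hits a gray-marked entry
                    return true;                     // -> direct or indirect recursion
                if (mark[callee] == Mark::White &&
                    reachesRecursion(callee, calls, mark))
                    return true;
            }
        }
        mark[fn] = Mark::Black;                      // every path through fn is clean
        return false;
    }
    ```

    The busy-flag idea above is essentially the same test, done at run time instead of at analysis time.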

    I don't know what old (pre-'90) compilers did; these are just my thoughts about how it could be done. I suspect that a lot of implementations simply said: Oh, so the program crashed? Well, you broke the rules, it is your own fault!

    The Lounge

  • This is so cool (absolute time sink!)
    T trønderen

    Gee, I didn't know that such a large fraction of all music made includes a modern drum set, an electric guitar and bass, and has a 4/4 time signature.

    The Lounge

  • When programming was (still) fun
    T trønderen

    The Fortran I used in 1978 did allow recursion. Neither compiler nor machine was mainstream, but a proprietary 16-bit mini. Its developers were more or less "fresh from school", they had learnt the academics of compilers for languages like Algol, Simula etc. and, I was told, never learned to handle function calls without a stack. So handling recursion came at no extra expense. They didn't know how to prevent it. :-)

    I do believe that the first BASIC I used in 1975, on a Univac 1110 mainframe, allowed recursion - but the manual is buried so deep down in the basement that I will not spend the time to verify it.

    The Lounge

  • Save water: delete your emails!
    T trønderen

    How much water would be saved if I threw away all my archived paper mail? Does it make a difference whether the mail is handwritten / written on an old-style typewriter, or a printout of an electronic mail?

    Regardless of how the letter was produced, I cannot understand how and why my paper mail archive cabinet consumes more water by holding a higher number of letters.

    Or ... Now that I come to think of it: Why would my backup disk - offline, of course, as any proper backup should be - consume more water if the bits stored on it form a number of email entries, compared to the bit pattern left by a proper erase program that fills disk pages with a shredding pattern?

    If the disk pages are not overwritten by a shredding pattern but simply linked over to the disk's freelist, how much would the disk drink then? As much as before mail cleanup, or reduced to the amount you could expect if they were filled with a shredding pattern?

    The Lounge

  • Encoding rationale
    T trønderen

    @Mircea-Neacsu: the first byte of the encoding indicates the number of expected continuation bytes.

    Well, if it were so ... If a byte is the first of two, three or four bytes: yes. If it is the first (and only) byte of one: no. When an argument about 'expected continuation bytes' holds sometimes, but not always, the argument fails. (With Western text, the rule is broken regularly.)

    @Mircea-Neacsu : less robust for initial sync if you are "eavesdropping"

    Making it simpler for an eavesdropper to break into a communication stream and synchronize is rarely a primary design criterion for an encoding ...

    Besides, it makes little difference whether you drop one, two or three bytes with '10' in the upper bits, or one byte with '1' in the upper bit. Once you have seen two of those, you know that you have the entire character code (the longest valid one is two bytes with a 1 at the top plus a following byte with a 0 at the top), just as in UTF8 a byte with '110', '1110' or '11110' at the top tells you that you are at the start of a character.
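    For the record, this is about all the resyncing amounts to in either scheme - a minimal sketch for the UTF8 case (buffer handling left to the caller):

    ```cpp
    #include <cstddef>
    #include <cstdint>

    // After jumping into the middle of a UTF8 octet stream: skip continuation
    // octets ('10' in the upper bits) until an octet that can start a character,
    // i.e. ASCII ('0xxxxxxx') or a lead byte ('11xxxxxx').
    size_t findCharStart(const uint8_t* buf, size_t pos, size_t len) {
        while (pos < len && (buf[pos] & 0xC0) == 0x80)
            ++pos;                               // continuation byte, keep skipping
        return pos;
    }
    ```

    In the 'upper bit means more' scheme you would instead skip octets with the top bit set and start at the octet after the first one with the top bit clear.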

    OK, so I see that there are 'arguments' in favor of the UTF8 design. But I do not accept that 'any argument is a good argument'. I do not see any of the arguments presented as 'good design arguments', whether it is making the eavesdropper's syncing easier, a rule about trailing bytes that holds sometimes but not in the most common case, or error robustness where in the most common case 7 out of 8 random bit errors go unnoticed.

    There was a (pre-internet) network named 'Bitnet', 'bit' being an acronym for 'Because It's There'. That's a really strong argument in favor of UTF8: It is there, and at least for web pages, it seems to be capable of clearing the ground, getting rid of the umpteen squared competing alternative encodings. Let's hope that it spreads to all computer applications, not just web pages.

    Success is not equivalent to design or engineering excellence (just look at the internet ...), but a less-than-perfect design is much preferable to complete chaos. UTF8 is a prime example.

    I am non-PC in that I want to be well aware of the weaknesses as well as the strengths of the tools I am using. I guess I am well aware of UTF8's weaknesses. My initial question was intended as a search for true strengths that I might have overlooked. It seems there are not many to speak of.

    Nevertheless, I will continue to advocate UTF8. Because it is there, and that is far better than complete chaos.

    Thanks to all for the comments you have made!

    The Lounge

  • Encoding rationale
    T trønderen

    Is the robustness against errors based simply on the logic that there are about 4000 invalid codes for every valid one, so a random bit pattern is unlikely to be a 'valid' code point - in other words, that any encoding using a small fraction of the code space would be just as robust? Or is there something particular to the way UTF8 does it?

    As long as you stick to ASCII text, i.e. single octet UTF8 encoding, the error must hit the upper bit to make an invalid code; 7 of 8 bit flips will go unnoticed. Even if the upper bit is flipped, the code may end up as a 'valid' multibyte code point.
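    A trivial way to see the single-octet case (a throwaway sketch; 'detectable' here just means the corrupted byte is no longer a plain ASCII octet):

    ```cpp
    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint8_t original = 'A';                    // 0x41; any ASCII octet will do
        for (int bit = 0; bit < 8; ++bit) {
            uint8_t corrupted = original ^ (uint8_t)(1u << bit);  // flip one bit
            bool stillAscii = (corrupted & 0x80) == 0;
            printf("flip bit %d -> 0x%02X : %s\n", bit, (unsigned)corrupted,
                   stillAscii ? "goes unnoticed (still ASCII)"
                              : "detectable (upper bit set)");
        }
        return 0;
    }
    ```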

    About 30 years ago (read: when technology was less developed than today), I was at a presentation of Frame Relay. One guy in the audience questioned the end-to-end checksum verification, replacing X.25 hop-by-hop verification. The speaker responded with a grin: In modern fiber networks, there are no transmission errors! (Then he moderated the statement somewhat: The error frequency is so low that even if a frame must be retransmitted all the way when it happens, it is much cheaper than hop-by-hop checking.)

    I can't think of any other data type where the choice of encoding is affected by error robustness considerations; take IEEE 754 as an example. The common strategy is not to protect each individual element, but to add robustness by collecting a number of them into a block that is augmented by a (data type independent) checksum or error correction value.

    Do you happen to know a link to a UTF8 encoding discussion that might shed light on the error protection argument?

    (A comment to @Mircea-Neacsu: even though I compare UTF8 to a simpler encoding, I did not mean to propose it as a UTF8 replacement; it was just for comparison purposes, to learn the true rationale behind UTF8. I am very much against the proliferation of alternatives we have almost everywhere in the programming world, and would seriously want UTF8 to be the single external coding of all text!)

    The Lounge

  • Encoding rationale
    T trønderen

    A simple "Uppermost bit means 'There is more'" has all these advantages, plus it saves space compared to UTF8: Four times as many characters can be coded in two octets where UTF8 requires three, and all remaining characters can be coded in three octets, where UTF8 requires four octets for any character from 0x10000 and up.

    The second point applies to the simpler coding scheme as well.

    The third point, about mixing languages freely, is a function of using a larger (21 bit) character code. Any encoding capable of encoding 21 bit values can handle it.

    I still wonder why the UTF8 designers chose such a complex encoding scheme, when a much simpler one would satisfy "all" requirements. Or: all requirements that I can see. There must be other requirements that I do not see.

    The Lounge

  • Encoding rationale
    T trønderen

    (I didn't find a suitable programming forum for this question, as it is not about programming :-))

    Years ago, I read arguments for the chosen UTF-8 encoding that I remember as convincing and rational.

    Several standards encode variable length integers in a simpler way: 7 bits in each octet, the upper bit set in all but the last octet.
    Reading/decoding: If bit 7 of the first octet is set, set the destination to -1, otherwise (or if the value is unsigned) to 0. Repeat: shift the destination value 7 bits left; add next byte AND 0x7F; until byte AND 0x80 = 0.
    Writing/encoding: zero/sign-extend the value to a multiple of 7 bits. From the top: if the next run of 8 bits (i.e. the 7 to potentially be stored plus the sign bit of the next group) are all identical (all 1 or all 0), skip to the next group. Otherwise, loop over the remaining 7-bit groups: if not the last group, OR with 0x80. Store as next octet.
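    A minimal sketch of that scheme for the unsigned case (the sign-extension handling above is left out for brevity; names are made up):

    ```cpp
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Encode an unsigned value as big-endian 7-bit groups: the 0x80 bit set
    // means 'there is more'; a clear top bit marks the last octet.
    std::vector<uint8_t> encodeVarUint(uint32_t value) {
        uint8_t groups[5];                       // 32 bits need at most five 7-bit groups
        int n = 0;
        do {
            groups[n++] = value & 0x7F;
            value >>= 7;
        } while (value != 0);

        std::vector<uint8_t> out;
        for (int i = n - 1; i > 0; --i)
            out.push_back(groups[i] | 0x80);     // continuation flag on all but the last
        out.push_back(groups[0]);                // last octet: top bit clear
        return out;
    }

    // Decode: accumulate 7 bits per octet until an octet with the top bit clear.
    uint32_t decodeVarUint(const uint8_t* p, size_t* consumed) {
        uint32_t value = 0;
        size_t i = 0;
        uint8_t b;
        do {
            b = p[i++];
            value = (value << 7) | (b & 0x7F);
        } while (b & 0x80);
        if (consumed) *consumed = i;
        return value;
    }
    ```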

    One argument (e.g. in Wikipedia) in favor of the UTF8 encoding is that if you jump right into an octet sequence, finding the start of a character is simple. But it is no more difficult to search for an octet with the top bit cleared.

    Using the 0x80 bit as a 'There is more!' flag allows all 21 bit Unicode characters to be encoded in three octets, rather than UTF8's four. Eight times as many character codes would fit in two octets. The format can handle 32, 64, 128 bits and any other integer length.

    I vaguely remember (from long ago) convincing arguments making me think that UTF8 designers did the right thing. I just can't recall those arguments. Can anyone guide me to a place to find them? (Or repeat them here!)

    At the moment I am unable to see why the simple 'upper bit means there is more' would not be just as good as, or in some respects better than, the UTF8 encoding.

    The Lounge

  • Non-US letters
    T trønderen

    I am "impressed", observing that in the year 2025, software still ingores non-US letters. In the porting process, my nick has changed from "trønderen" to "tronderen". Appearently, Norwegian letters are accepted in message bodies, at least at editing time - we'll see if they display correctly at read time: æøå ÆØÅ.

    Hopefully, the limitation to 7-bit ASCII was confined to the porting process. But even a porting tool should have known in the year 2025 that the world retired 7-bit ASCII many years ago.

    The Lounge

  • Variables
    T trønderen

    To add even more: For class objects (instances), the compiler adds another "semi-hidden" field to hold a reference to the class definition object. The class definition is present at run time and may be interrogated; it is the type of the class instances.

    Second: A debugger knows the types of even primitive values. That is because the compiler (optionally) writes a lot of metadata, called 'debug information', to a separate file or to separate sections of a linkable / executable file. The debug information is not loaded into RAM during normal execution. When a debugger is in control and is given an address in RAM, it can look up in the debug info what type of variable (or whatever) is located at that address. The lookup is not direct; interpreting the debug info is not a task for beginners!
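    A small C++ illustration of the first point (class names made up; the "semi-hidden" field here is the vtable pointer the compiler adds to polymorphic classes, through which the run-time type can be interrogated):

    ```cpp
    #include <iostream>
    #include <typeinfo>

    struct Animal { virtual ~Animal() = default; };  // polymorphic: gets a hidden vptr
    struct Dog : Animal {};

    int main() {
        Animal* a = new Dog;
        // Even through a base pointer, the dynamic type is reachable via the
        // hidden field, so it can be interrogated at run time.
        std::cout << typeid(*a).name() << '\n';      // implementation-defined name for Dog
        delete a;
        return 0;
    }
    ```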

    Religious freedom is the freedom to say that two plus two make five.

    C / C++ / MFC performance question

  • Learning C# or really .net, confirm for me my suspicions
    T trønderen

    A small remark slightly on the side: I never before heard dotNet IL (Intermediate Language) referred to as "P-code". The term "P-code" was made famous by the Pascal compiler of 1970 (or thereabouts), generating code for execution on a virtual machine. Writing an interpreter (i.e. the virtual machine) for the Pascal P-code was quite simple; there were interpreters for dozens of machine architectures. The version that became prominent was called P4; it was the 4th version of P-code for the original Pascal compiler. I even believe that some people replaced the microcode of the LSI-11, the single-chip version of the PDP-11 architecture, with alternative microcode to interpret P4 directly, without the need for an interpreter - but if my memory is correct, this solution was slower than running the LSI-11 with the original microcode and an interpreter on top :-)

    P-code, or "bytecode" in general, is designed to be treated as the instruction set of a virtual machine, executed "as is". dotNet IL is certainly not meant for execution as is on a virtual machine. I will not say that it is impossible, but it most certainly is pointless and extremely inefficient. IL is not a P-code or bytecode, but "source code" for the jitter - the Just In Time code generator, translating IL into native machine code for whatever CPU the jitter is run on, immediately before execution. This probably is not at all significant to the OP. Maybe he will later get in touch with dotNet IL. When/if that happens, he should not believe that IL is interpreted the way P-code is.

    End remark: P-code/bytecode interpretation is certainly slower than native code, in some circumstances a lot slower. With C# - and all other dotNet languages - having the jitter generate native code, it had a speed advantage over Java, which used to interpret Java bytecode. You could compile Java to native code from the beginning, or at least very early, but then you got a binary that couldn't execute on any other architecture, and couldn't utilize e.g. optional instructions available on a specific machine. Java bytecode could be moved freely around to any machine with an interpreter. To be able to compete with C# on speed, the JVM (Java Virtual Machine) was extended with the capability of, rather than executing the bytecode as is, processing the bytecode further into native code for the current machine. This improved the execution speed significantly. However, Java bytecode wasn't designed for this use, and a

    The Lounge csharp com help question c++

  • Drivers
    T trønderen

    Calin Negru wrote:

    I’m going to stop bugging you with my questions, at least for now.

    Don't worry! It is nice having someone ask questions, so that the responder is forced to straighten out things in his head in a way that makes them understandable. As long as you can handle somewhat lengthy answers, it is OK with me! :-)

    When you get around to asking questions about networking, there is a risk that I might provide even longer and a lot more emotional answers. I am spending time nowadays straightening out why the Intranet Protocol has p**ed me off for 30+ years! When I do that kind of thing, I often do it in the form of a lecturer or presenter who tries to explain ideas or principles, and must answer questions and objections from the audience. So I must get both the ideas and principles right, and the objections and 'smart' questions. That is really stimulating - trying to understand the good arguments for why IP, say, was created the way it was.

    (It has been said that Albert Einstein, when he as a university professor got into some discussion with graduate students - and of course won it - sometimes told the defeated student: OK, now you take my position to defend, and I will take yours! ... and again, Einstein won the discussion. If it isn't true, it sure is a good lie!)

    Religious freedom is the freedom to say that two plus two make five.

    Hardware & Devices tutorial question lounge

  • Drivers
    T trønderen

    Calin Negru wrote:

    Is a machine instruction a 32 bit word/variable?

    This is something I have been fighting since my student days! :-) What resides in a computer isn't "really" numbers. Or characters. Or instructions. Saying that an alphabetic character "really is stored as a number inside the machine" is plain BS! RAM, registers and whatever else hold bit patterns, period. Not even zeroes and ones, in any numeric sense. It is charged/uncharged. High/low voltage. High/low current. On/off. Not numbers.

    When a stream of bits comes out of the machine, we may have a convention for presenting e.g. a given sequence of bits as the character 'A'. That is a matter of presentation. Alternately, we may present it as the decimal number 65. This is no more a "true" presentation than 'A'. Or a dark grey dot in a monochrome raster image. If we have agreed upon the semantics of a given byte as an 'A', claiming anything else is simply wrong. The only valid alternative is to treat the byte as an uninterpreted bit string. And that is not as a sequence of numeric 0 and 1, which is an (incorrect) interpretation.

    A CPU may interpret a bit sequence as an instruction. Presumably, this is also the semantics intended by the compiler generating the bit sequence. The semantics is that of, say, the ALU adding two registers - the operation itself, not a description of it. You may (rightfully) say: "But I cannot do that operation when I read the code". So for readability reasons, we make an (incorrect) presentation, grouping bits by 4 and showing them as hexadecimal digits. We may go further, interpreting a number of bits as an index into a string table where we find the name of the operation. This doesn't change the bit sequence into a printable string; it remains a bit pattern, intended for the CPU's interpretation as a set of operations.

    So it all is bit patterns. If we feed the bit patterns to a printer, we assume that the printer will interpret them as characters; hopefully that is correct. If we feed bit patterns to the CPU, we assume that it will interpret them as instructions. Usually, we keep those bit patterns that we intend to be interpreted as instructions by a CPU separate from those bit patterns we intend to be interpreted as characters, integer or real numbers, sound or images. That is mostly a matter of orderliness. And we cannot always keep a watertight bulkhead between those bit patterns intended for text o

    Hardware & Devices tutorial question lounge

  • Drivers
    T trønderen

    In principle, a driver is very much like any other method / function / routine or whatever you call it. It may be written in any language - for all practical purposes, any compiled language; for performance/size reasons, interpreted languages are obviously not suited. There is one requirement for bottom level drivers: the language must, one way or the other, allow you to access interface registers, hardware status indicators etc.

    If you program in assembler (machine code), you have all facilities right at hand. In the old days, all drivers were written in assembler, without the ease of programming provided by medium/high level languages for data structures, flow control etc. So from the 1970s, medium level languages came into use, providing data and flow structures, plus mechanisms for accessing hardware - e.g. by allowing 'inline assembly': most commonly, a special marker at the start of a source code line told that this line is not a high level statement but an assembly instruction. Usually, variable names in the high level code are available as defined symbols for the assembler instructions, but you must know how to address them (e.g. on the stack, as a static location etc.) in machine code.

    The transition to medium/high level languages started in the late 1970s / early 1980s. Yet, for many architectures / OSes, with all the old drivers written in assembler, it was often difficult to introduce medium/high level languages for new drivers. Maybe there wasn't even a suitable language available for the given architecture/OS, of which there were plenty in those days. So for established environments, assembler prevailed for many years. I guess that some drivers are written in assembler even today.

    If the language doesn't provide inline assembler or equivalent, you may write a tiny function in assembler to be called from a high level language. Maybe the function body is a single instruction, but the 'red tape' for handling the call procedures makes up a dozen instructions. So this is not a very efficient solution, but maybe the only one available.

    Some compilers provide 'intrinsics': those are function-looking statements in the high level language, but the compiler knows them and does not generate a function call, but a single machine instruction (or possibly a small handful) right in the instruction flow generated from the surrounding code. E.g. in the MS C++ compiler for ARM, you can generate the vector/array instructions of the CPU by 'calling' an intrinsic with the name of the
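    As a hedged illustration of what the 'inline assembly' route looks like in one modern form (GCC/Clang extended-asm syntax on x86; older compilers used the line-marker style described above):

    ```cpp
    #include <cstdint>

    // Write one byte to an x86 I/O port - the kind of hardware access a
    // bottom-level driver needs and the high level language itself cannot express.
    static inline void outByte(uint16_t port, uint8_t value) {
        asm volatile ("outb %0, %1"
                      :                              // no outputs
                      : "a"(value), "Nd"(port));     // value in AL, port in DX/immediate
    }
    ```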

    Hardware & Devices tutorial question lounge

  • Drivers
    T trønderen

    Calin Negru wrote:

    How does the Operating System perform I/O operations? Does he talk directly to the Hard Drive or is the communication mediated by the motherboard?

    You most definitely go via the motherboard! The OS talks to the chipset, which talks to some I/O bus - for a disk, it is typically SATA, USB or PCI-e. These present an abstracted view of the disk, making all disks look the same, except for obvious differences such as capacity. At the other (i.e. the disk) end is another piece of logic, mapping the abstract disk onto a concrete, specific one (i.e. mapping a logical address to a surface number, track number, sector number), usually handling a disk cache as well. In modern disks, the tasks are so complex that they certainly require an embedded processor.

    USB and PCI-e are both general protocols for various devices, not just disks. Sending a read request and receiving the data, or sending a write request and receiving an OK confirmation, is very much like sending a message on the internet: the software prepares a message header and message body according to specified standards, and passes it to the electronics. The response (status, possibly with data read) is received much like an internet message. Writing a disk block to a USB disk (regardless of the disk type - spinning, USB stick, flash disk, ...) or writing a document to a USB printer is done in similar ways, although the header fields may vary (but standards such as USB define message layouts for a lot of device classes, so that all devices of a certain class use similar fields).

    All protocols are defined as a set of layers: the physical layer is always electronics - things like signal levels, bit rates etc. The bit stream is split into blocks, 'frames', with well defined markers, maybe length fields and maybe a checksum for each frame (that varies with the protocol); that is the link layer. There may be a processor doing this work, but externally it appears as if it is hardware. Then, at the network layer, the data field in the frame is filled in with an address at the top, and usually a number of other management fields. For this, some sort of programmed logic (an embedded processor, or dedicated interface logic) is doing the job - but we are still outside the CPU/chipset. The chipset feeds the information to the interface logic, but doesn't address the USB or PCI-e frame as such, or the packet created (within the link frame) by the network layer. Both USB and PCI-e define

    Hardware & Devices tutorial question lounge

  • Drivers
    T trønderen

    There are almost as many answers to this question as there are motherboards :-) Or at least as there are CPU chip generations. Over the years, things have changed dramatically.

    In very old days, I/O signals could come directly from, or go directly to, the CPU pins. Extra components on the board were essentially for adapting the physical signals - RS-232 signals could go up to +/- 25V, which won't make the CPU chip happy. Gradually, we got support chips that e.g. took care of the entire RS-232 protocol, with timings, bit rate, start/stop bits etc. handled by a separate chip. Similar for the 'LPT' (Line PrinTer) parallel port; it was handled by a separate chip. The CPU usually had a single interrupt line - or possibly two, one non-maskable and one maskable. Soon you would have several interrupt sources, and another MB chip was added: an interrupt controller, with several input lines and internal logic for multiplexing and prioritizing them. Another chip might be a clock circuit. DMA used to be a separate chip. For the 286 CPU, floating point math required a supporting chip (the 287). Other chips handled memory management (paging, segment handling etc.): adding the MMU chip to the MC68000 (1979) gave it virtual memory capability comparable to the 386 (1985). Not until the 68030 (1987) was the MMU logic moved onto the main CPU chip.

    There were some widespread "standard" support chips for basic things like clocking, interrupts and other basic functions. These were referred to as the chipset for the CPU. We still have that, but nowadays technology allows us to put all the old support functions and then some (quite a few!) onto a single support chip, of the same size and complexity as the CPU itself. Even though it is a single chip, we still call it a 'chipset'. Also, a number of functions essential to the CPU, such as clocking, memory management and cache (which started out as separate chips), were moved onto the CPU chip rather than being included in 'the chipset chip'.

    You can view the chipset as an extension of the CPU. You may call it the 'top level' MB chip, if you like. In principle, it could have been merged into the main CPU, but for a couple of reasons it is not: first, it acts as a (de)multiplexer for numerous I/O devices, each requiring a number of lines / pins. The CPU already has a huge number of pins (rarely under a thousand on modern CPUs; it can be up to two thousand). The CPU speaks to the chipset over an (extremely) fast connection, where it can send/receiv

    Hardware & Devices tutorial question lounge

  • Drivers
    T trønderen

    There were video cards a long time ago - that is, 40 to 50 years - where you did not select the scan rate from a table of predefined horizontal and vertical frequencies, but set the frequencies by absolute values. If you set one of them to zero (usually by forgetting to set it, or because setting it failed for some reason and you didn't handle it immediately), the electron beam wouldn't move at all but remain steady in one position. If you set one but not the other, the electron beam would continuously redraw a single scan line or column. The phosphor on the screen surface is not prepared for being continually excited at full strength, and it would "burn out", lose its capacity to shine. So you would have a black spot, or a black horizontal or vertical line across the screen. This damage was permanent and could not be repaired.

    This is so long ago that I never worked with, or even saw, a video card that allowed me to do this damage to my screen. I do not know what that standard (if it was a standard) was called. My first video cards were 640x480 VGA, and the marketing heavily stressed that you could not destroy your screen whatever you sent to it. So the memory of these 'dangerous' video cards was still vivid in the 1980s (but I do believe that VGA's predecessor, EGA, was also 'safe').

    Related to this "burn out" danger was the "burn in": everyone who had a PC in the 1990s remembers the great selection of screen savers back then. After a few minutes of idleness, a screen saver would kick in, taking control of the screen, usually with some dynamically changing, generated graphics. Some of them were great - I particularly remember the plumbing one, where every now and then a teapot appeared in the joints: we could sit for minutes waiting to see if any teapot would pop up :-)

    These screen savers did save your screen: if you left your PC and screen turned on when leaving your office (or for your home PC, when going to bed) with the background desktop shining, the bright picture elements in icons, the status line or whatever would slowly "consume" the phosphor in those spots. After some weeks or months, when you slid a white background window over those areas, you might see a shadow of your fixed-location icons and other elements in what should have been a pure white surface. The image of the desktop was "burnt into" the screen. Note that this has nothing to do with your display card, driver or other software: the signal to the screen is that of a pure white surface; the screen is incapable of dis

    Hardware & Devices tutorial question lounge

  • Drivers
    T trønderen

    Register-like mechanisms are present in almost all kinds of circuit boards, especially when dealing with interfaces to external units. I don't think 'cells' is a common term; they are more commonly called registers, occasionally 'latches' (for registers catching input values).

    Registers on peripheral interfaces are used for preserving internal values as well, not just values input or output. Quite often, they are addressed in similar ways. With memory mapped I/O (which is quite common in modern machines, except for the X86/X64), one address is a straight variable, another address sets or returns control bits or data values in the physical interface circuitry, and a third address is a reference to the internal status word of the driver. So which is a 'register', which is a 'variable', and which one is actually latching an input or output value? The borders are blurry. Whether you call it a register, a variable, a latch or something else, it is there to preserve a value, usually for an extended period of time (when seen in relation to clock cycles).

    When some interface card is assigned 8 word locations (i.e. memory addresses) in the I/O space of a memory mapped machine, don't be surprised to see them referred to as 'registers', although you see them as if they were variable locations. When you address the variables / registers in your program, the address (or the lower part of it) is sent to the interface logic, which may interpret it in any way it finds useful. Maybe writing to it will save the data in a buffer on the interface. Maybe it will go to line control circuitry to set physical control lines coming out of the interface. There is no absolute standard; it is all up to the interface.
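    A sketch of what such an 8-word assignment can look like from the program's side (addresses, layout and bit meanings are all invented for illustration; on a real board they come from the interface documentation):

    ```cpp
    #include <cstdint>

    struct CardRegs {                       // the 8 word locations assigned to the card
        volatile uint32_t data;             // writing may go to a buffer on the interface
        volatile uint32_t control;          // bits may drive physical control lines
        volatile uint32_t status;           // reading may latch current input values
        volatile uint32_t internal[5];      // e.g. the driver's internal status words
    };

    // Hypothetical base address in the memory mapped I/O space.
    static CardRegs* const card = reinterpret_cast<CardRegs*>(0x40001000);

    void sendWord(uint32_t w) {
        while ((card->status & 0x1) == 0)   // wait until the interface reports 'ready'
            ;
        card->data = w;                     // looks like a variable, acts like a register
    }
    ```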

    Religious freedom is the freedom to say that two plus two make five.

    Hardware & Devices tutorial question lounge

  • Drivers
    T trønderen

    Digital circuits are not by definition clocked, if by that you mean that there is a central clock setting the speed of "everything". Circuits may be asynchronous, going into a new state whenever there is a change in the inputs to the circuit. Think of a simple adder: you change the value on one of its inputs, and as soon as the voltages have stabilized, the sum of the two inputs is available at the output. This happens as quickly as the transistors are able to do all the necessary switching on and off - the adder doesn't sit down waiting for some 'Now Thou Shalt Add' clock pulse.

    You can put together smaller circuits into larger ones, with the smaller circuits interchanging signals at arbitrary times. Think of character based RS-232 ("COM port"): the line is completely idle between transmissions. When the sender wants to transfer a byte, he alerts the receiver with a 'start bit' (not carrying any data), at any time, independent of any clock ticks. This is to give the receiver some time to activate its circuits. After the start bit follow 8 data bits and a 'stop bit', to give the receiver time to handle the received byte before the next one comes in, with another start bit. The bits have a width (i.e. duration) given by the line speed, but are not aligned with any external clock ticks.

    Modules within a larger circuit, such as a complete CPU, may communicate partly or fully by similar asynchronous signals. In a modern CPU with caches, pipelining, lookahead of various kinds, ... not everything starts immediately at the tick. Some circuits may have to wait e.g. until a value is delivered to them from cache or from a register: that will happen somewhat later within the clock cycle. For a store, the address calculation must report 'Ready for data!' before the register value can be put out. Sometimes you may encounter circuits where the main clock is subdivided into smaller time units by a 'clock multiplier' (PCs usually have a multiplier that creates the main clock from the pulses of a lower frequency crystal; the process can be repeated for even smaller units), but if you look inside a CPU, you should be prepared for a lot of signal lines not going by the central clock.

    The great advantage of un-clocked logic is that it can work as fast as the transistors are able to switch: no circuit halts waiting for a clock tick telling it to go on - it goes on immediately. The disadvantage is that unless you keep a very close eye on the switching speed of the transistors, you may run into synchroniz

    Hardware & Devices tutorial question lounge

  • Ah the joys of late stage optimization
    T trønderen

    Oh'Really?

    Religious freedom is the freedom to say that two plus two make five.

    The Lounge performance graphics algorithms architecture code-review