That's why function templates were created in C++. They give you all of the compile-time features of macros without adding overhead or degrading execution speed. When I am coding in C++, I *rarely* use macros. Which isn't to say that macros, properly implemented, aren't useful. Take, for example, the need to "stringize" a set of characters. You can't really beat: #define STRINGIZE(x) #x
:) One more thing... too often, macros are used to represent a value, as opposed to using a const
variable declaration. One should eschew this: #define PI 3.1415927f
in favor of this: const float PI = 3.1415927f;
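To make that concrete, here is a minimal sketch of what the const form buys you (the names PI_MACRO and geometry are made up for the illustration, and constexpr is used as the modern spelling of the const idea): the macro is untyped text substitution that ignores scope, while the variable has a type, obeys namespaces, and can still participate in compile-time expressions.

#include <iostream>

#define PI_MACRO 3.1415927f              // untyped text substitution, ignores scope
constexpr float PI = 3.1415927f;         // typed, scoped; the modern equivalent of 'const float PI'

namespace geometry
{
    constexpr float TWO_PI = 2.0f * PI;  // participates in constant expressions
}

int main()
{
    std::cout << geometry::TWO_PI << '\n';
    std::cout << PI_MACRO << '\n';
}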
-
This, this right here... is why people hate macros... -
The Zig Saga..."Per my understanding, if I used the C++ standard libs I'd have to link to the C++ library as well, in conjunction with the C libs." The only time I now use C/C++ is in embedded work. That generally means running on some kind of RTOS, and more to the point, limited flash RAM. I mention this because, at least in the embedded world using a complier like Eclipse (or similar), I can and often need to fine tune which libraries I link with. As a general rule in embedded development in C++, we typically eschew the standard library in favor of speed and size. That means grabbing specific code from the Standard Library for code like the example I gave above. In fact, it is much more common to grab the source for specific calls and just include the source code file, rather than linking to a whole library. Example: I have my own collection of open source implementations of function calls like
itoa
,printf
,strcat
, etc. In the Windows/Mac/Linux world, I don't think there is really any appreciable difference even if you bring in other libraries. Note: It's a good thing to keep in mind that if you require the entire library to deploy, the compiler doesn't link to the whole library, but rather only links to the necessary code it references. What that means practically, is depending on what library calls you make/link to, the performance hit is often negligible if you choose to use the Standard Library from the compiler as opposed to bringing in your own source as needed. Equally important to remember is that when you use the STL part of the standard library, no such limitation exists. Everything required is generated at compile time, and there is no linking to any external code. So to summarize, if you use the C++ compiler just as a better C compiler, AND you don't make calls into the library that are not implemented as templates, you get the best of all worlds. :) -
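As a rough illustration of the kind of standalone routine one might compile straight into a firmware image, here is a minimal itoa-style helper. This is a generic sketch, not any particular open-source implementation; the name my_itoa and the base-10, signed 32-bit behavior are assumptions for the example.

#include <cstdint>
#include <cstdio>

// Minimal, allocation-free itoa-style helper (base 10, signed 32-bit).
char* my_itoa(std::int32_t value, char* buffer)
{
    char* p = buffer;
    // Compute the magnitude without overflowing on INT32_MIN.
    std::uint32_t magnitude = (value < 0)
        ? static_cast<std::uint32_t>(-(value + 1)) + 1u
        : static_cast<std::uint32_t>(value);
    if (value < 0)
        *p++ = '-';

    // Write digits least-significant first, then reverse them in place.
    char* digits = p;
    do
    {
        *p++ = static_cast<char>('0' + (magnitude % 10u));
        magnitude /= 10u;
    } while (magnitude != 0u);
    *p = '\0';

    for (char* q = p - 1; digits < q; ++digits, --q)
    {
        char tmp = *digits;
        *digits = *q;
        *q = tmp;
    }
    return buffer;
}

int main()
{
    char buf[12];                    // enough for "-2147483648" plus the terminator
    std::puts(my_itoa(-1234, buf));  // prints -1234
}
-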
The Zig Saga...So, curious question... even if you are trafficking in .c files, why aren't you using a C++ compiler to compile your C code? Even in embedded environments, C++ is preferred to C. If you must strictly conform to C++ only as better C, and avoid polymorphism, exception handling and RTTI, you are still left with one very powerful tool at your disposal - templates! Using stupid template tricks, you can easily strip your number of the thousands separator, or indeed, any delimiter such as the '-' (dash) in sweet Jenny's phone number. It might look something like this:
#include <iostream>
#include <string>
#include <algorithm> // For std::remove_if
#include <cctype>    // For std::isdigit

// Function to remove thousands separators from a formatted number
std::string removeThousandsSeparators(const std::string& formattedNumber)
{
    std::string result = formattedNumber;
    // Remove any non-digit characters (e.g., commas)
    result.erase(std::remove_if(result.begin(), result.end(),
                                [](unsigned char c) { return !std::isdigit(c); }),
                 result.end());
    return result;
}
Here is an example of how it might be used:
int main()
{
    // Example usage
    std::string formattedInput = "1,234,567"; // Input with thousands separators
    std::string strippedNumber = removeThousandsSeparators(formattedInput);

    std::cout << "Formatted input: " << formattedInput << std::endl;
    std::cout << "Stripped input: " << strippedNumber << std::endl;
    return 0;
}
If you dislike the idea of having to quote your value, you can also add a pre-processor macro to handle that as well:
// Macro to stringify an argument (variadic, so a value containing commas works too)
#define STRINGIZE(...) #__VA_ARGS__

You can then use it like so:
auto strippedNumber = removeThousandsSeparators(STRINGIZE(1,234,567));
Keep in mind, the stringizing itself is done at compile time, so quoting the value this way carries no extra runtime cost. When there's a will, there's a relative. :-D :laugh:
-
User interfaces from he!!It's been my experience that engineers who use some form of 'beep' are the same engineers who insist that source code needs no comments i.e. everything you write is self documenting, and so it is with everything you 'beep'. :rolleyes:
-
Any bets on when Copilot gets added to Notepad?If you're dead set on the old behavior, here's how to change the multi-tab mode AND the auto-save state: Press ALT+S for Settings. There is an option that says "when Notepad starts..."; select "open new window" rather than "open previous session". This returns the old behavior, including asking whether you want to save changes when you press ALT+F4. You can also disable tabs via the "choose where files go when opened" option, selecting "new window". Enjoy! :)
-
Visual Basic 6 did this, why can't you, C#?Allowing fall-through code was/is a huge source of bugs, and does not fall in line with best practices. During any refactoring, it's not uncommon to move/reorder code, and without close scrutiny a break in logic flow is easily introduced by overlooking the fall-through. Worse, refactoring/reordering code might be necessary, but the fall-through has now introduced an artificial constraint. For case statements with few case points, this requirement is a minor inconvenience at best. If, however, you have numerous case points, this might be indicative of a code smell. Such code is both difficult to understand and to follow. Perhaps a different approach might be more appropriate... finite state machines, keyword/lambda method dictionaries (see the sketch below), etc. Something to consider when designing well-crafted code that adheres to best practices.
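To illustrate the keyword/lambda dictionary idea, here is a minimal sketch, written in C++ for this example; the command names and handlers are made up. Each "case" becomes an entry in a dictionary of handlers, so there is no fall-through to get wrong when the code is reordered.

#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

int main()
{
    // Each former case label becomes a key; each body becomes a lambda.
    const std::unordered_map<std::string, std::function<void()>> handlers =
    {
        { "open",  [] { std::cout << "opening...\n"; } },
        { "save",  [] { std::cout << "saving...\n"; } },
        { "close", [] { std::cout << "closing...\n"; } },
    };

    const std::string command = "save";
    if (auto it = handlers.find(command); it != handlers.end())
        it->second();                         // dispatch to the matching handler
    else
        std::cout << "unknown command\n";     // plays the role of the default case
}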
-
Any bets on when Copilot gets added to Notepad?So I take it you've never used Notepad++? Same stuff, different devs.
-
There are times when I think the world is rather mad.They give you just enough cable to hang yourself with... :laugh: :laugh: ;P
-
(Can I talk about UI?) Column headers vs content alignmentWhy not allow it to be configurable? Default perhaps to center aligned, and right click mouse button to select left/center(/right?). To really answer this question, best to get feedback from QA/alpha/beta testers methinks?
-
What's y'all's favorite way of dealing with floating point precision?I was a bit alarmed :omg: by your reply and solution below: For this project, I'm in JavaScript/TypeScript and dealing with money. So there is no decimal type. But, after this chat I decided to just add two extra decimal places of resolution. So, I'll store a currency amount as 1.1234 and only round it off to 2 during reporting. There are two possible solutions:
1) If you are always/only going to traffic in money, a more robust solution would be to use integer math and display formatting. As an example, the value "$1.23" would be stored in an integer of sufficient size to house the min/max dollar value you wish to traffic in. Using RegEx, it would be trivial to strip off the '$' and '.', yielding the price offset by a factor of 100. To display, you could use simple formatting. You can store the values as-is to a data store, or, if you require marshaling of values, cast the value to a float and divide by 100. In this case, I would use Number or BigInt. A quick search on the largest integer type gives the following results: The biggest integer type in JavaScript is BigInt. It was introduced in ECMAScript 2020. BigInts can store integers of any size, while the Number type can only store integers between -(2^53 - 1) and 2^53 - 1. (A sketch of the integer approach follows below.)
2) You could incorporate decimal.js into your project, which will provide you with the decimal type you seek. You can find that here: https://mikemcl.github.io/decimal.js/
Whichever way you choose, I would implore you NOT to add arbitrary/additional digits to the right of the decimal place. It will come back to bite you! :((
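Here is a minimal sketch of the integer-cents idea. It is written in C++ purely for illustration (the thread itself is about JavaScript, where Number or BigInt would play the role of the 64-bit integer); the helper names are made up, and there is no validation or negative-amount handling.

#include <cstdint>
#include <iostream>
#include <string>

// Store money as whole cents in a 64-bit integer; format only for display.
using Cents = std::int64_t;

// Parse a display string such as "$1.23" by keeping only the digits.
// (Illustrative only: assumes exactly two decimal places.)
Cents parsePrice(const std::string& text)
{
    std::string digits;
    for (char c : text)
        if (c >= '0' && c <= '9')
            digits += c;
    return digits.empty() ? 0 : std::stoll(digits);
}

std::string formatPrice(Cents cents)
{
    std::string fraction = std::to_string(cents % 100);
    if (fraction.size() < 2) fraction.insert(0, "0");
    return "$" + std::to_string(cents / 100) + "." + fraction;
}

int main()
{
    Cents price = parsePrice("$1.23");       // 123 cents
    Cents total = price * 3;                 // exact integer math: 369 cents
    std::cout << formatPrice(total) << '\n'; // $3.69
}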
-
What's y'all's favorite way of dealing with floating point precision?After decades of writing software for industrial, medical, financial and LoB (Line of Business) applications, I found that the following guidelines work:
1) Financial and money: I always use the decimal type for currency, and the smallest sized type that affords me the precision I need. So why would I use any other type in a currency/money app? Simple example: I'm writing a trading application, where the strike price will be stored in a decimal type, and the number of shares will be stored in a float type. Why not use a decimal type for the number of shares? Because there's no guarantee that it will be 3 places to the right of the decimal (that's typical, but not a hard-and-fast rule). I chose float because it's the smallest type that offers the precision I seek. By smallest I mean that a double is typically twice the size of a float. For those tempted to respond that floats are 64 bits and doubles are 128 bits: not necessarily. That's a very PC-centric view. Note: these guidelines typically, but not always, apply to LoB.
2) For medical and industrial, which usually require floating point precision to store values that may not be the same as the formatting to the display, I use floats and doubles, using the smallest type that affords the precision required by the application under development. What do I mean by the smallest type and precision? The size of the type refers to how large the floating point type has to be in order to maintain the level of precision (the number of places after the decimal point) without appreciable loss to rounding and implicit conversions (more on that below).
Caveats: There are several other considerations when choosing and writing floating point code.
A) Rounding loss: This refers to how precise a resulting value is after some operation is performed on it. This is not limited to mathematical operations only (multiplication, division); it also applies to any library calls used to generate a new value, e.g. sqrt(...).
B) Conversions: Be very, very careful about mixing types, i.e. decimal, float and double. When a smaller type is promoted to a larger type, it may introduce random "precision" that actually makes the new value deviate farther from the mean, i.e. the new value strays farther from representing the true value. So for example:
float pi = 3.1415927f;                      // 'f' suffix keeps the literal a float
float radius = 5.2f;
double circumference = 2.0f * pi * radius;  // computed entirely in float precision, then widened to double
-
Do you even bother with tech books anymore?It depends... With the advent of StackOverflow and sites like this, coupled with the decreasing quality of mainstream tech books, I found them to be a waste of money. But... when it comes to more narrow topics, e.g. writing kernel drivers or data pipelining, that's where tech books shine, if you can find one on the narrow subject you seek. Gone are the days of tech books for the sake of the craft, such as Andrew Schulman's Undocumented Windows and its ilk. How I miss those days... Also, I used to love going to tech bookstores to browse books. Not only mainstream stores like B&N and Borders, but more focused stores like the McGraw Hill bookstore on 6th Avenue in Manhattan. Sigh...
-
Closures (C#)It has to do with capture, where we grab the value of i by the use of assignment into n before launching another task. There is nothing magical about using a variable named n... we could have called it george. What's important is that the current value of i in the for loop is captured before launching the task. If we had not done that, the for loop would complete execution, setting i to 11 before the first task was launched. That is what is meant by closure.
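A rough analogue of that capture-the-current-value idea, sketched in C++ rather than the C# from the thread (the "tasks" here just print, and the variable names mirror the i and n discussed above):

#include <functional>
#include <iostream>
#include <vector>

int main()
{
    std::vector<std::function<void()>> tasks;

    for (int i = 1; i <= 10; ++i)
    {
        int n = i;                                        // grab the current value of i, like the 'n' above
        tasks.push_back([n] { std::cout << n << ' '; });  // each task sees its own captured value
        // Capturing i by reference instead ([&i]) would leave every task reading
        // a variable that has already changed (or gone out of scope) by the time
        // the tasks actually run.
    }

    for (auto& task : tasks) task();                      // prints 1 2 3 ... 10
    std::cout << '\n';
}
-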
MFC? WinForms? I gotta ask... why?That's actually context dependent which is more difficult. Even the most basic Forms and Custom Controls, which can be a requirement in even the simplest data entry/LOB applications, are far more difficult to write and implement in WinForms than in WPF. One of the very core strengths, which is also its weakness vis-à-vis learning curve, is the emphasis on reusability. Control templates, data templates, triggers, etc. are very powerful paradigms that are very difficult to implement in WinForms, and lead to overly complicated code.
-
MFC? WinForms? I gotta ask... why?For those that require many calls to the OS, I found that writing COM objects in C++ to access the OS is far easier than PInvoke calls. You access the COM objects through .NET RCWs. Just be careful to correctly implement IDisposable on classes that host your COM/RCW objects, especially if you are managing unmanaged resources.
-
MFC? WinForms? I gotta ask... why?Just to add one point to Tom's reply: Tom and I worked together back in the day on a very large WPF Point of Sale application for Grainger. Given that it was a very dynamic UI with no specific fixed "form", WPF was (and is) the logical choice for this LoB application. That being said, when I worked at Citibank back in the very early 90's, we needed a very similar (in idea) type of dynamic UI presentation, and Win32 was the only game in town (MFC was only in alpha at that point). If anyone from that era remembers, dialog boxes and their ilk were created in the Dialog Editor, produced .RC files, and had to be run through a Resource Compiler. We made that application sing, and it would rival any WinForms or WPF LoB application today. It was, in a word, a thing of beauty, all written entirely in C. When you're a software developer, you have to mix a little magic into the mix! ;) :)
-
MFC? WinForms? I gotta ask... why?Could you explain point #3 -- what you mean that WPF does not support native development? If you are referring to APIs such as Win32 (or any DLL with Exports), that's what PInvoke is for, and is GUI agnostic. COM and COM InterOp is also fully supported.
-
MFC? WinForms? I gotta ask... why?Lots of good answers, and from my 4+ decades of experience as a professional developer (now retired), I suspect it's a combination. So, allow me to weigh in with some of my own thoughts.
My experience includes Win16/Win32 APIs (C), MFC (C, C++), WinForms (C#) and WPF (C#, Framework and Core). While I agree with the sentiment that one must keep moving forward in a professional setting (i.e. clients as the end users), I have also endured the pain of being unable to upgrade because of client budget, incompatibilities, and developer culture. In addition, much of the move from C/C++ to .NET and across GUI platforms was made both easier and more difficult with COM, remoting, COM InterOp (RCWs/CCWs), as well as XML and JSON. When I was heavily invested in COM, I used WTL (Windows Template Library) for my GUIs when working in C++. WTL cannot replace MFC (it was never meant to), but for most applications it can hold its own.
Here's the thing... I still love C/C++; it's often the only choice for me when I work on embedded projects (think Eclipse IDE, etc.). That being said, C# is soooo much easier to work with, is in many ways more powerful, and is as performant. That leaves me wondering why C/C++ for a Windows GUI project.
When it comes to which GUI platform in the C# world, WinForms' "strength" is also its weakness. If you have ever tried to implement components, or had to copy/reuse/resize a form and its subsequent components, you know how incredibly difficult that can be. Add in language support and it becomes a nightmare. On the other hand, WPF is indeed a steep learning curve, but what you get in return is extensibility and full theming and language support in XAML through control and data templates, styles, etc. Once you use and master WPF, there's no going back! :) One could punt and use WPF in a minimalist way by using a Grid control and just plopping controls in fixed positions. One could avoid learning a large part of WPF, but that's like buying a Ferrari only to transport people around by dragging it behind a tow truck.
So I do get that it's context dependent, such that there are still a great many developers who choose alternate paths. Sometimes there's no compelling reason when, for example, supporting a legacy application that is no longer viable to port or upgrade. I also defend the right of hobbyists to pick their poison. I still love, now and again, to fire up my Trash80, if for no other reason than to see if ChatGPT's ports to Z-80 really work! ;P :laugh: One group notabl
-
MFC? WinForms? I gotta ask... why?I've noticed a trend here on the site where authors/developers are still writing GUI applications in MFC (for C++) and/or WinForms. What's noticeably absent is WPF in either Framework or .NET Core. So I pose the question (this is not snark, I genuinely want to know) why authors/developers are still choosing old(er) technology? I used to think it was the learning curve, but I suspect there's more to this story. Bonus Question: What's also noticeably absent is VB6 and VB.NET. Have those platforms truly bitten the dust (for good)?