Mainly it grows because every async method has a state machine created by the compiler. Yes, partial async is a contradiction: at best it leads to the same thread-blocking as fully synchronous code, and the worst outcome is usually deadlock. The fact that async is an all-or-nothing commitment is, IMO, the main reason developers don't embrace it, but there's no excuse not to use it for greenfield projects. If you do it right, you also pass a CancellationToken with every call; increasingly, I'm also becoming convinced of passing IProgress, at least for certain public members. There was recent interest in replicating the Go concept of green threads, which I think would have been a game changer had it been adopted instead of async/await. Green Thread Experiment Results · Issue #2398 · dotnet/runtimelab · GitHub
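A minimal sketch of what "doing it right" might look like: async all the way down, with a CancellationToken and an IProgress&lt;T&gt; threaded through the public surface. The names Worker and ProcessAsync are invented for illustration.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class Worker
{
    // Hypothetical example: every public async member accepts a
    // CancellationToken and (optionally) reports progress.
    public static async Task<int> ProcessAsync(
        int itemCount,
        IProgress<double> progress = null,
        CancellationToken cancellationToken = default)
    {
        int processed = 0;
        for (int i = 0; i < itemCount; i++)
        {
            cancellationToken.ThrowIfCancellationRequested();
            await Task.Delay(1, cancellationToken); // stand-in for real async I/O
            processed++;
            progress?.Report((double)processed / itemCount);
        }
        return processed;
    }
}
```

A caller cancels via a CancellationTokenSource and observes progress with `new Progress<double>(p => ...)`; note that blocking on the returned Task from a UI thread would reintroduce exactly the deadlock risk described above.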
Andre_Prellwitz
-
What's y'all's favorite way of dealing with floating point precision?
Thousands of calculations per second still shouldn't take any time; come back when you're talking billions and we can start using SIMD or even WebGPU to do some heavy lifting. Meanwhile, you need to ask stakeholders where the rounding may happen. It's pretty standard that individual transactions have tax rounded, for example, and it must stay consistent. There are multiple strategies for rounding, including rounding to even or to odd, and the strategy shouldn't be chosen at random. Don't forgo correctness because of an ill-placed desire for speed.
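To make the point concrete, here are two midpoint strategies applied to the same tax amount; which one is correct is a stakeholder decision, not a default. The class and method names are invented for illustration.

```csharp
using System;

public static class Rounding
{
    // Banker's rounding: midpoints go to the nearest even digit,
    // which avoids systematic upward bias over many transactions.
    public static decimal Bankers(decimal value)
        => Math.Round(value, 2, MidpointRounding.ToEven);

    // "Schoolbook" rounding: midpoints always move away from zero.
    public static decimal Schoolbook(decimal value)
        => Math.Round(value, 2, MidpointRounding.AwayFromZero);
}
// Rounding.Bankers(0.125m)    -> 0.12
// Rounding.Schoolbook(0.125m) -> 0.13
```

Note the use of `decimal` rather than `double` for money, which sidesteps binary floating-point representation error entirely.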
-
So...what's .NET doing, exactly?
The code is still compiled to IL; it's just not JIT-compiled to machine code at run time. NGEN does this ahead of time (AOT) to improve startup performance at the expense of disk space, assuming disk reads are much quicker than JIT compilation. This also saves time and bandwidth, since only the IL is transferred from Microsoft.
-
I hate recent C# versions!
There is often a trade-off between succinctness and clarity. It's not always about "saving a few keystrokes"; sometimes it's about removing nonessential details, better expressing intent, letting a developer (literally) see the whole picture, or increasing the speed of understanding. Other times we see features added that have proved valuable in other languages or platforms.
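A small sketch of "removing nonessential details" rather than saving keystrokes, using a few of the newer features (names are invented for illustration):

```csharp
using System.Collections.Generic;

// A record (C# 9+) replaces an entire class: constructor, properties,
// value equality, GetHashCode, and ToString all come for free.
public record Point(int X, int Y);

public static class NewSyntax
{
    // Expression-bodied member: the intent, minus the ceremony.
    public static int Square(int n) => n * n;

    // Target-typed new (C# 9+): the type is already stated in the signature,
    // so repeating it on the right adds nothing for the reader.
    public static Dictionary<string, List<int>> EmptyScores() => new();
}
```

The record, in particular, isn't shorter for its own sake: value equality written by hand is a classic source of subtle bugs.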
-
code sexiness question
You're not wrong, but you're also not necessarily right. Coding preferences should be agreed upon, codified, and automated to avoid debate; the important thing is consistency. Calling a coding practice "lazy" (like the use of 'var') is condescending at best and smells of arrogance. It's also associated with narrow-mindedness and, quite frankly, can date you in a bad way. I'm sure the intent was a call to action, but your reasoning and diction could be improved. Very simply, the guidance on the use of 'var' states that it should be used only if the type is repeated or obvious (without IntelliSense) on the right side of the equals sign; this makes it easier to spot variable declarations while respecting the intelligence of the reader, who may not care what the type is up front, especially if the name is well chosen or the type name is long. Similar logic can be applied, per your preference, to the implicit new() operator, though I tend to see those used mostly in field initializers. Having said that, the Framework Design Guidelines recommend *against* the use of var, except when using 'new', 'as', or a hard cast, in which cases it is *permissible*.
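The guidance above, sketched as code (names invented for illustration):

```csharp
using System.Text;

public static class VarDemo
{
    public static int Demo()
    {
        // Permissible: the type is obvious on the right of '='.
        var builder = new StringBuilder();

        // Permissible: a hard cast already names the type.
        object boxed = 42;
        var number = (int)boxed;

        // Avoid 'var' here: a call's return type is invisible without
        // IntelliSense, so spell it out when the name alone isn't clear.
        string text = builder.Append(number).ToString();

        return text.Length; // "42" has length 2
    }
}
```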
-
code sexiness question
It's unlikely the compiler knows that no other thread may change the SelectedObject during the getter body. One should acknowledge that fact by capturing its value, not for speed or readability, but for correctness. That so many people find this trivial or over-optimized is really scary. Pragmatically it may be a non-issue, but it's a time bomb, and there's arguably no justification for the "simpler" syntax. Concurrency is hard.
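A minimal sketch of the capture pattern, with hypothetical names (the original SelectedObject code isn't shown in this thread):

```csharp
using System;

public class Inspector
{
    // Imagine another thread can swap this reference at any moment.
    public volatile object SelectedObject;

    public string Describe()
    {
        // Wrong: two separate reads of the field; it can become
        // null between the check and the use.
        // return SelectedObject != null ? SelectedObject.ToString() : "none";

        // Right: capture once, then work only with the local copy.
        var selected = SelectedObject;
        return selected != null ? selected.ToString() : "none";
    }
}
```

The local copy can't be changed by another thread, so the null check and the dereference are guaranteed to see the same object.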
-
Clean Code, who missed the p(o)int... ?
Agreed; static code analysis is designed to prefer false positives over false negatives. What I actually meant was the simple linters that fix code-style differences: if you cannot even agree on standards (rules such as "interface names should start with the letter 'I'" or "avoid Hungarian notation"), then what are the odds you can agree on what's considered "clean"?
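For what it's worth, the "interface names start with 'I'" rule is exactly the kind of thing that can be codified so the tooling, not a code review, settles the argument. A sketch of what that might look like in an .editorconfig for the .NET code-style analyzers (rule and style names are arbitrary):

```ini
[*.cs]
# Rule: interfaces must be PascalCase and prefixed with 'I'
dotnet_naming_rule.interfaces_start_with_i.severity = warning
dotnet_naming_rule.interfaces_start_with_i.symbols = any_interface
dotnet_naming_rule.interfaces_start_with_i.style = prefix_i

dotnet_naming_symbols.any_interface.applicable_kinds = interface

dotnet_naming_style.prefix_i.required_prefix = I
dotnet_naming_style.prefix_i.capitalization = pascal_case
```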
-
Obscure paths to serious bugs
The source of the majority of the problems I see daily is this mentality: 1) we don't have enough unit tests, so rather than refactor and force QA/myself to retest all affected code paths, I'll create a *new copy* of existing code; and 2) there's no way to know when my code changes actually create dead code, so don't ever worry about cleaning up.
-
Clean Code, who missed the p(o)int... ?
Programmers who can't be bothered to use linting tools are rarely concerned with cyclomatic complexity, maintenance cost, or refactoring. Also, it seems the broken-window theory applies to software: if it's already crap software, most people won't bother writing good code, resorting instead to the metric of lowest cost to write (even if it means the user or company pays the price many times over).
-
WinUI
Dear ImGui is not used for graphics per se; it is used with a GPU to render a UI.
-
WinUI
What most people fail to understand is that XAML controls are inherently scalable, which is one reason Visual Studio 2010 was rewritten in WPF. While you could certainly create your own controls in either WinForms or XAML, I've never seen custom controls implemented with accessibility or true internationalization in mind, though that's a rarity even with HTML. I recently learned of margin-block and margin-inline, and it has made me reevaluate all the layout work I've ever done, AKA blown my mind. Besides the skinning of controls, the one thing that constantly changes is the paradigm of UX...for better or for worse. Discoverability is horrible these days (Windows 8, swipe left/right, long "tap") and seems to be overcome only by social reinforcement: old farts need not apply.
-
WinUI
Sorry, but those buttons date the app to at least ten years ago, and the title font even more, like 30 years (I know because I was there). The icons are nice but need more alpha channel to prevent jaggies.
-
What's your biggest Solution?!
Hopefully all the projects' build output folders are set to the same folder. What really matters is the number of lines of code, methods, classes, and files.
-
Cosmetic vs More Efficient
C# has the null-coalescing operator:
inVal = inVal ?? internalDefault;
Or, with C# 8+, the null-coalescing assignment:
inVal ??= internalDefault;
It doesn't get much cleaner and clearer than that.
-
NuGET Packagies
Indeed, I got the feeling we're being trolled, or punked at least.
-
NuGET Packagies
NuGet is a publishing platform. If a vendor up and disappears, your dependence on their library (maybe as an interface for their API or hardware) is in danger whether they publish that library on NuGet, the web, or a thumb drive. At least with NuGet you know the library remains available in its present form, compared to a website the vendor stops paying for. In general, a dependency should be easily replaceable, especially if it is intrinsic to your functionality. And while larger vendors have a smaller chance of disappearing, a dependency without abstraction can still pose a risk to efficient replacement. Just ask Parler.
-
NuGET Packagies
So I see three issues mentioned that have nothing to do with NuGet:
1. Version conflicts for common dependencies between libraries
2. Auto-updates
3. Package configurations which say a new version is compatible with your project
NuGet doesn't magically solve #1 for you, though it *can* update your references for the simple case, if you let it. Which highlights that #2 is not normally the default. Finally, #3 is the fault of the packager, not the deliverer. It's easy to configure things wrong, and for the misconfiguration to go unnoticed if it's new or exotic enough. What NuGet does help you solve is keeping your source-control download small, especially if the bulk of your code is libraries, unless someone unknowingly checks in the "packages" folder.
-
Anyone used AssemblyScript?
The use case for AssemblyScript is for those who want to write WebAssembly in a nicer syntax, or in an object-oriented way. While WASM is considered a "universal target" for other programming languages, a lot of overhead and a massive toolchain come with providing the source platform's environment and libraries. The AssemblyScript compiler can be told to remove even its minimal OO library and compile straight to the bare-spec WASM binary format; this can be orders of magnitude faster to compile, download, and, in some cases, execute.
-
Git Source Control
> Where is the code hosted at?
The main thing to understand about git is that it separates the concerns of *version control* and *centralized storage*. Initially you have version control stored locally, which you then optionally synchronize to a server. The nice thing is that you can commit local changes, revert, switch branches, etc. without access to the centralized server. This is possible because you have a copy of the entire repository on your local machine, including all commits, which allows you to work offline, for example.
The hardest part is synchronizing your local copy with a copy elsewhere, called a "remote". It's not usually bad, but it can get pretty complicated. See https://medium.com/@ottovw/forget-git-just-use-subversion-or-perforce-8412dc1b1644
One way to understand why git is the way it is, is to understand the design rationale behind it. Linus wanted the ability to work while offline, so he could work while traveling: this means that check-out (aka locking) is abolished, merging is the standard approach to check-in (aka commit), tooling keeps track of changed files and also handles branch switching, and viewing history/blame/commit graphs is fast and optimal. It's meant as a DVCS, with a robust security model and scalability to handle projects like Linux (and it enables monorepos, for example).
I used to hate git, because it makes things twice as complicated as, say, Subversion. Honestly, most of the reasons for using git don't apply to most enterprise development unless you have massive codebases and teams. Probably the main reasons companies embrace it are 1) everyone else is doing it and 2) it's free. These days, especially after having to deal with even "modern" TFS and its lack of performance, mainly due to its coddling of the developer (omigosh, everything--especially merges--has to go through the server in case a workstation suddenly blows up), I find I'm liking git more and more.
There are alternatives like Perforce or PlasticSCM that do DVCS well, if that's a requirement, but they also cost money. If you're just doing a small project, a local git repository occasionally synced with a remote is trivial to set up and easy to use. One may argue that its learning curve makes its cost non-zero.
-
HDMI or USB/DAC?
> is it a maximum of 192 kbps as I have heard mentioned?
The *signal* rate (bits/second) of USB is far better than what's needed for the *encoding* rate (bits/second) of the audio files/streams; USB 2 in particular can do 480 Mbps. The *sample* rate (samples/second) of a DAC is multiplied by the bits/sample and the number of channels--usually two--to get the number of bits/second needed to feed it at the highest quality. For CD audio, this is 1.411 Mbps, which is achievable even by USB 1.
> but has lower sound quality, probably due to being a cheaper bit of hardware
Even a "cheap" DAC can do 192 kHz at 24 bits/sample, but it needs 9.2 Mbps to do that at highest quality. Mainly, what determines sound quality is the weakest component, of course. The analog stage's electrical isolation (especially from the power source) tends to have the biggest impact these days, because fewer discrete components are needed in capable designs; a battery-operated device can achieve that isolation very easily. The digital bus (USB, I2S, S/PDIF, etc.) can generally be considered equivalent to an audio noise wall, even with relatively cheap cables connecting the system nodes. Now, the perceptual encoding of Adagio+ is below even the "encoding" rate of CD audio, but it's unlikely you'll be able to hear much of a difference between that and a CD. OTOH, if your system--including your ears--can distinguish between CD and SACD rates, then you may need to invest in a better DAC.
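The arithmetic above, spelled out (class and method names are invented for illustration):

```csharp
public static class AudioMath
{
    // bits/second a DAC must be fed = samples/second * bits/sample * channels
    public static long RequiredBps(int sampleRate, int bitsPerSample, int channels)
        => (long)sampleRate * bitsPerSample * channels;
}
// CD audio:       AudioMath.RequiredBps(44_100, 16, 2)  -> 1,411,200  (~1.411 Mbps)
// 192 kHz/24-bit: AudioMath.RequiredBps(192_000, 24, 2) -> 9,216,000  (~9.216 Mbps)
```

Both figures sit comfortably under even USB 1's 12 Mbps signal rate, which is the point: the bus is rarely the bottleneck.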