How, VS Code? This is amazing.
-
The IntelliSense for C++ in VS Code, at least when it works, is nothing short of incredible.
template<size_t BitDepth>
using bgrx_pixel = pixel<
    channel_traits<channel_name::B, (BitDepth / 4)>,
    channel_traits<channel_name::G, (BitDepth / 4)>,
    channel_traits<channel_name::R, (BitDepth / 4)>,
    channel_traits<channel_name::nop, (BitDepth / 4)>>;
using rgb18_pixel = pixel<
    channel_traits<channel_name::R, 6>,
    channel_traits<channel_name::nop, 2>,
    channel_traits<channel_name::G, 6>,
    channel_traits<channel_name::nop, 2>,
    channel_traits<channel_name::B, 6>,
    channel_traits<channel_name::nop, 2>>;
What you're looking at is two arbitrarily defined pixels. One is an N-bit pixel where 3/4 of the bits are used, and the second example is a 24-bit pixel where 18 bits are used. That's not really important, but the channel names are, because consider this:
rgb18_pixel::is_color_model<
    channel_name::R,
    channel_name::G,
    channel_name::B>::value

If you know C++ you can tell there's metaprogramming magic here. What I'm doing is querying a "list" of channel traits at compile time, looking for ones with particular names. The thing is, if you hover over value, the extension will resolve it to true in the tooltip that pops up - no easy feat.
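For a sense of what the extension has to chew through, here's a minimal, self-contained sketch of the general technique - matching queried channel names against a trait list at compile time while skipping padding channels. Everything here (channel_name, channel_traits, pixel, model_matches) is a hypothetical stand-in, not gfx's actual implementation:

#include <cstddef>
#include <tuple>
#include <type_traits>

// Hypothetical stand-ins for the library's channel names and traits.
namespace channel_name {
    struct R {}; struct G {}; struct B {}; struct nop {};
}
template<typename Name, std::size_t Bits> struct channel_traits {
    using name = Name;
};

// Walk the channel traits, skipping nop padding, and check that the
// remaining names equal the queried Names... in order.
template<typename Query, typename... Traits> struct model_matches;

// Query exhausted: any remaining channels must be padding.
template<typename... Ts>
struct model_matches<std::tuple<>, Ts...>
    : std::conjunction<
          std::is_same<typename Ts::name, channel_name::nop>...> {};

// Channels exhausted but the query isn't: no match.
template<typename First, typename... Rest>
struct model_matches<std::tuple<First, Rest...>> : std::false_type {};

// General case: skip padding, otherwise require a name match and advance.
template<typename First, typename... Rest, typename T, typename... Ts>
struct model_matches<std::tuple<First, Rest...>, T, Ts...>
    : std::conditional_t<
          std::is_same_v<typename T::name, channel_name::nop>,
          model_matches<std::tuple<First, Rest...>, Ts...>,
          std::conditional_t<
              std::is_same_v<typename T::name, First>,
              model_matches<std::tuple<Rest...>, Ts...>,
              std::false_type>> {};

template<typename... Traits> struct pixel {
    template<typename... Names>
    using is_color_model = model_matches<std::tuple<Names...>, Traits...>;
};

// Hovering over ::value (or compiling this static_assert) forces the
// compiler front end to walk the whole recursion.
using rgb18 = pixel<
    channel_traits<channel_name::R, 6>, channel_traits<channel_name::nop, 2>,
    channel_traits<channel_name::G, 6>, channel_traits<channel_name::nop, 2>,
    channel_traits<channel_name::B, 6>, channel_traits<channel_name::nop, 2>>;
static_assert(rgb18::is_color_model<
    channel_name::R, channel_name::G, channel_name::B>::value, "RGB model");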
More impressive even is this:

using color18_t = color<rgb18_pixel>;
auto px = color18_t::gray;

It will determine the actual numeric value for that color and display it when you hover over "gray" (2155905024). You think that's easy? No.
constexpr static const PixelType gray = convert(color::source_type(true,
    0.501960784313725,
    0.501960784313725,
    0.501960784313725));

Notice it's running a constexpr function, convert(), to get the destination pixel format. This is a non-trivial function.
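To get a feel for the kind of evaluation that demands, here's a tiny self-contained sketch of constexpr channel packing - hypothetical names and a much simplified layout, nothing like gfx's real convert(). The point is that the tooltip can only show a literal number by actually executing code like this at compile time:

#include <cstdint>

// Hypothetical sketch: scale three 0..1 doubles to 6-bit channels and
// pack them into a 24-bit R6/pad2/G6/pad2/B6/pad2 layout.
constexpr std::uint32_t pack_rgb18(double r, double g, double b) {
    auto to6 = [](double v) {
        return static_cast<std::uint32_t>(v * 63.0 + 0.5) & 0x3Fu;
    };
    return (to6(r) << 18) | (to6(g) << 10) | (to6(b) << 2);
}

// Hovering over gray18 can only display a number if the IDE runs a real
// constexpr evaluator over pack_rgb18.
constexpr std::uint32_t gray18 = pack_rgb18(0.501960784313725,
                                            0.501960784313725,
                                            0.501960784313725);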
So one of two things is happening here: either the C++ extension for VS Code has a compliant C++ compiler front end and middle end built in (I suspect it does), or it is managing to link itself to existing compilers like GCC tightly enough to determine this output (which doesn't seem possible to me). Either way, go Microsoft.

Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
Totally agree. And Copilot has helped me greatly with boilerplate code.
The difficult we do right away... ...the impossible takes slightly longer.
-
I haven't messed with Copilot. I'm "AI"-averse, and will probably remain so until they get better. I like Visual Studio's AI integration because it's explicit - you have to smash Tab at each step and it shows you what it will do next. That's important because it's so often wrong. I'm not sure if Copilot works like that or some other way, but honestly, I can pretty much think in C++ at this point, so it's almost more effort to have to prod an LLM to give me the code I want. By the time I do, I could have figured it out myself, without the round trip from C++ to English and from English back to C++.
-
Yes, that's the same way that Copilot works.
-
Interesting, thanks. I might tinker with it, if it's free.
-
I started playing with Copilot a couple days ago. I found the experience a bit freaky at first, kind of like the thing being inside your head, anticipating what you want to do. It was really good at first, but as the application I was working on evolved, getting into the more detailed items, it started writing code with mistakes in it.

I wanted to see what it could do, and started by letting it write the code as much as possible (you know, like the typical QA question at CodeProject. :) ). I started with a simple FileSystemWatcher app and described that I wanted a WPF app that monitored a directory and its subdirectories and logged all the changes it saw. It generated a simple app with a FileSystemWatcher, but more like a console app with no XAML UI. It wrote everything with no problems, working as expected. Impressive, but I wanted to see how far it could go.

Next, I told it to change the app to a XAML UI, using MVVM and a TreeView, and keep that updated with changes as they happened in the file system. It explained everything, walking me through all the code changes and what I needed to add/remove. It came up with the hierarchy model and the correct XAML bindings and everything! That was mind-blowing! I didn't expect it to figure out how to rewrite the code for an entirely new UI. Talk about freaky!

I was just evolving the app, one step at a time, describing what I wanted the app to do, and it was coming up with the right suggestions - and that's when the mistakes started. The first mistake was a minor issue with a XAML binding that it didn't wire up correctly, one-way instead of two-way. I pointed that out and it actually said I was correct and came up with the correct, simple fix. Wait, this thing can figure out its own mistakes?!

OK, let's give it something more difficult to understand. At this point, the window was split into the TreeView on the left and a log of FileSystemWatcher events on the right. I got it to color the event messages based on event types, like Changed, Created, Deleted. It put together a ListBox with a TextBlock for each item in the list, with the correct DataTriggers for each message type and the correct Foreground and Background property setters. I told it to change the ListBox to a ListView, and it correctly rewrote the XAML and kept all the formatting. I told it that didn't look right to me and to back out that change, and it did exactly that, going back to the ListBox code. Mind blown! Very impressive so far. Let's dig into a little more detail. The background of
-
I've been tinkering with these free AI tools for a while now, specifically ChatGPT, Copilot and CodePal. The only one I've tried for generating code was CodePal. Not very impressed: it only generates code snippets ('functions'), and after very few such short snippets it reaches the limit for free usage.

For non-code-related questions, I found ChatGPT far better than Copilot. It used to be really bad just a few months ago, but it has significantly improved. It reaches the limit for free usage pretty fast though, requiring a few hours of timeout. Copilot is just pathetic, in my experience, at this time.

Both Copilot and ChatGPT quite often give wrong answers - they don't 'realize their own mistakes', but rather admit them when pointed out, leading to 'I apologize for the confusion/misunderstanding/frustration...'. Almost never 'for the wrong answer'! They're trained well ;) What they're not trained to say is 'I don't know', which I'd prefer instead of the constantly wrong answers. I usually know the correct answer, or at least I know when an answer is wrong, but what happens if I don't?

They're nice toys, but not really useful and trustworthy in my opinion. Maybe I'm not very good at asking questions (that they can understand), but I phrase them in a very clean manner and provide enough context, so I doubt that. I would myself be impressed with any decent answers (never mind code generation), because my expectations are pretty low, but so far I'm not. Your Mileage definitely Varies!