Open Source
Apparently, open source does not necessarily imply open mind... I've worked primarily for companies developing commercial software and can definitely see what you are getting at. GPL licensing is practically useless for incorporation into traditional commercial applications, as it requires the source for your company's app to be distributed as well. I once needed a C++ wrapper for a complex Win32 API I wasn't familiar with. The first one I found had GPL headers. I emailed the author of the library, and he said, "just remove the headers and do what you like with the code". Yeah, right. The guy likely just copied the headers from somewhere without knowing what their purpose was. I quickly moved on to another article at CodeProject instead and used it to understand the API better and write my own wrapper. No more licensing bs. If you're going to hand out free software, you should let people use it how they like (I mean use, not republish, plagiarize, resell as a code library, etc.). Correct me if I'm wrong, but as I understand it, the default licensing of article code on this site is not as onerous as the GPL. If it were, I doubt this site would be as popular as it is.
-
Double buffering DC with user mapping mode
Try applying the exact same transformations and drawing to the memory DC only. When done, call BitBlt like this: outputdc.BitBlt(0, 0, clientRect.Width(), clientRect.Height(), &memdc, 0, 0, SRCCOPY);
-
Large data sets
My guess is you are still running out of virtual memory. Other members' suggestions, such as using an STL list or memory-mapped files, are worth looking into as well. But I would also consider whether your application really needs to work with the entire dataset in memory all at once. For example, is it possible to allocate a fixed cache of, say, 1 million points, then read in a million points, process them, write them back out, and so on? The general idea is to see if your requirements allow you to load/unload just a portion of the dataset on demand, rather than all of it up front.
-
Two observations
I think that employers put too much emphasis on prior experience when considering new hires. After 10+ years developing commercial C++ software at four different companies, I've observed that the worst programmers in a workplace tend to be those who: 1) Type very slowly. I don't mean they type awkwardly or aren't touch typists; I just mean they are slow. 2) Have little to no formal training in computer science. I've worked with many computer/electrical engineers and math/physics grads who took a computer science option in university and are excellent programmers. But those same types of grads without any formal training usually suck big time. My observation is based on my own experience, not on members of this site. There may well be many great articles on CodeProject written by people who are self-taught programmers; those people likely have a gift or talent for programming.
-
Articles and 1 votes
My guess about some possible reasons for someone to do this (in general):
* Author annoyed the voting member with some comment in the past, and this is a simple case of revenge.
* Voter has a competing article in the same category and wants to vote the author down to lower their rating. Possibly related to helping their own article, or other articles they prefer, make it into the monthly article competition.
* Voter thinks the rating is too high. They likely don't believe the article is a 1; they just want to knock the average down.
Basically, I am saying that some people have an agenda when voting. They have some kind of bias. For example, I think some people will vote an author higher if they have something in common with the author (e.g., nationality). I'd like to see membership levels granted solely on the basis of article and message count, not length of membership. And restrict voting to members who have contributed something - messages and/or articles. People with 0/0 should not be allowed to vote. I'd also like to see a volunteer panel of CodeProject members/staff that ensures an unbiased component to the rating of any article.
-
Large data sets
I thought a double was 8 bytes. With 29 million xy points, that's two doubles per point, or over 400 MB. Anyway, another idea is to define an XYPoint class to encapsulate your pair of doubles and store pointers to XYPoint in your vector instead. When you close a dataset, don't delete all of the points - you could return some of them to a free pool (up to 50 MB, say) so that the point objects can be reused when loading the next dataset.