Richard Dawkins has written about this at some length. He used a recipe and a blueprint rather than a programming language and a database as his metaphors, but the point is the same: DNA is much more like instructions than it is like data. See for instance this blog[^] (I can't vouch for its quality; I just googled quickly, and superficially it looks relevant).
dojohansen
Posts
-
Is DNA somewhat of a programming language or database -
Saw something strange...Yes. And I think someone was wide awake and wanted to create this little mystery. AFAIK you can only run one instance of Snipping Tool. :) One other possibility I can think of is a remote desktop session. You could pull up the tool on the remote computer and then be distracted, say by an email, and then fire up the tool on the local machine. The tool captures remote desktop output just as easily as anything else on screen, so I suppose this way you could end up with a capture of Snipping Tool showing its fresh capture.
-
Saw something strange...That's nothing! We lived for three months in a brown paper bag in a septic tank... Python references aside, I think I can top that. I've actually seen a *system* where they did something about as clever: One program generated contracts from document templates and some data. The resulting doc was then turned into a TIFF image by automating Word, server-side, and printing it with a TIFF printer driver. This image file was then run through OCR logic to extract from the document the very data you originally started with in order to generate it. The reason for this wonderful design? Everything else that was to happen whenever a contract was entered into was tightly coupled to a scanning solution from a time when all contracts were received physically by snail mail and scanned in order to be processed by computer. The scanning solution wasn't actually very old, but the people who owned it had more influence than the people owning documents. It's amazing what convoluted silliness can survive out there in the wild. And that, in part, is why I can believe Elon Musk when he says it's perfectly possible to build a transportation system that costs a tenth as much as a bullet train to set up, is safe, runs on solar, and is about twice as fast as an airplane... Of course, our industry still easily takes the top spot for wasteful idiocy. Taking a few bytes of character data and turning it into a TIFF image of millions of bytes, most of which are not even related to the data of interest, then doing OCR to get (most of) the data back again (most of the time), is surely several million times more complicated, and several million times more work, than almost any straightforward mechanism. Maybe there is hope for the future. If we can be this crazy in software, who's to say existing transport systems, or energy usage in general, aren't simply the result of narrow thinking and attempts to improve what already exists?
By starting with a clean sheet, it may well be possible to do radically better in a lot of areas.
-
Saw something strange...1. CTRL+PRINTSCREEN 2. WIN+R => "pbrush" 3. CTRL+V 4. Crop 5. CTRL+S (alternatively, CTRL+A,C if your mail client lets you paste images into mail)
-
Epic Visual Studios "No Code"MS's standard approach to everything these days seems to be to make something that sometimes works and can possibly be made to work eventually, with luck. Nobody seems to give a damn whether anything is *correct*. Your particular story sounds like a corrupted build cache. Ever noticed the "Clean" menu item? That exists specifically for working around the times VS has messed up its caches, making sure everything gets built from scratch. Granted, having a build cache does help a lot with build speed in many cases, as VS sometimes manages to correctly work out what has changed and must be rebuilt, and what hasn't and can be taken from the cache. I'll also grant that it may not be completely trivial to ensure the cache status is always correct, given that you may change files in all sorts of ways besides within VS, and given that build actions nowadays may encompass all sorts of things besides just compiling some code (e.g. code generators often execute immediately prior to build). Even so, it does amaze me how VS sometimes manages to mess it up all by itself. The simplest solution, consisting of a single console application project with a single Program.cs file and doing absolutely nothing outside of VS, may still cause it to stumble. But this sort of thing is perfectly in tune with how VS behaves in other respects. It can't modify a file because "another process" is using it, and it turns out it's VS blocking VS. It confidently asserts "all files are up to date" when Solution Explorer shows a folder with hundreds of files and your working folder is empty. And for most of these, what do they do? Fix it, so VS behaves correctly? Oh no. They add a special menu option and name stuff so it seems as if YOU are at fault rather than VS. I chuckle whenever I need "Get Specific Version" in order to get the latest version, and the dialog offers me the choice to fetch files even when the local version matches the specified version.
Now what could *possibly* be the point of spending time replacing a file with an identical file? Clearly, MS knew full well about the problem, but was too embarrassed to honestly state "get the file even if VS believes it already has it, as it sometimes mistakenly thinks so"... I'm sure others could add many other examples of this general pattern of sketchy workarounds on top of semi-working base functionality. At least the glory days of VSS have passed - some people lost their entire source history due to its tendency to occasionally corrupt its own database files...
-
URGENTZ: So what's the current state of affairs with "The Microsoft Way" of web development?I don't think the future will be asp.anything to be honest, but what do I know. :)
-
URGENTZ: So what's the current state of affairs with "The Microsoft Way" of web development?Azure is their only hope, isn't it?!? They are losing rapidly on anything with a UI, but many of their servers are doing well. If the world decides servers ought to live in the cloud, they ought to be well positioned to offer the best way of running their servers in the sky, and since their servers are used quite a lot... this is a big opportunity. And the whole platform-as-a-service thing is smart because it offers the same kinds of lock-in that other platform solutions do...
-
MS has lost it all! Time to move on?I don't see that you made a point. :) Nobody's saying, as far as I've noticed, that MS hasn't ever done anything great. To the contrary, it's being said MS is past its golden days. I'm a developer living in an almost entirely MS universe; Windows, MSSQL, BizTalk, SharePoint, IIS, .net, msmq - and VS, TFS, and so on. And although I have limited experience with the competition's servers and devtools, I like a lot of the stuff MS gives us, and believe I would need lots more tools (making life more complicated) if I were to make the same software without MS products. And still I think MS is basically shot. They have so much legacy that they can't change fast enough to have a chance. Technological breakthrough nearly always means disruption, but not that things remain unstable forever. MS was disrupted by the internet. Just like they had a stronghold in the PC market, others now control the new markets, and MS is now one of the many players that face barriers to entry, rather than being the one setting up and maintaining them.
-
MS has lost it all! Time to move on?I agree, but would also mention an elephant in the room you overlooked: LEGACY. Microsoft have lots of skilled people, but just like all other behemoths they aren't in a position to just make something brand new, stop spending resources on their old stuff, and ignore the existing customer base. Which is just to repeat that they have legacy. :)
-
Website DevelopmentWow. I'm amazed you bother to answer with what was actually asked for. :) Personally I feel the responsible answer is something along the lines of "unless this is a pet project to learn, you aren't ready for it; either study and practice for a long time, or hire someone who can program to do the job". The question betrays that we are dealing with an absolute beginner (by assuming that there is one way in which websites are or should be developed).
-
Application efficiency?Now you're just teasing me...? It is nothing like a functional language. It's plain old procedural code. And you can of course write procedural code in C#, in methods in classes. But that doesn't make it OOP. :)
-
C# obtain return code from a proxyI'm not sure what you mean by that. It should "expose" the return value *as* a return value. Externally, its usage should be identical to your original proxy, but internally it should call the proxy, update tracking information, and return whatever the proxy returned. Perhaps we are talking past one another. What I mean is you have some generated proxy like this:
class Proxy
{
    public int Foo()
    {
        // ...
    }
}

and some code that uses it,
class UserCode
{
    void Bar()
    {
        var x = new Proxy().Foo();
    }
}

You can now introduce a "metaproxy",
class MetaProxy
{
    int clientID;
    Proxy proxy = new Proxy();

    public MetaProxy(int clientID)
    {
        this.clientID = clientID;
    }

    public int Foo()
    {
        var x = proxy.Foo();
        Tracking.Register("Foo", clientID, x);
        return x;
    }
}
Lastly, of course, the user code must be modified to use MetaProxy instead of Proxy. I don't know the details of what sort of tracking you really need to do. Nor do I know, or want to know :), everything needed to say whether this is how you should obtain the information you need (exemplified by "clientID"). If you have this information everywhere you are making such calls, perhaps this is a good way of doing it. If you don't, and this is in an application processing requests (so that everything that happens in a thread happens on behalf of a particular client), perhaps it'd be nicer to put the clientID in a ThreadStatic field instead; then your MetaProxy could just have a default constructor and sniff out the clientID from there. So you still have to do your own thinking. But hopefully this makes it clear how I propose you can establish tracking. Of course, just keeping track of things is not going to do anything more than that.
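To sketch the ThreadStatic variant mentioned above (the Tracking class is a minimal stand-in, and CurrentClientID is a made-up name; in a real request-processing app you'd set it once at the start of each request thread):

```csharp
using System;

class Proxy
{
    public int Foo() { return 42; } // stand-in for the generated proxy
}

static class Tracking
{
    public static void Register(string op, int clientID, int result)
    {
        Console.WriteLine("{0}: client {1} got {2}", op, clientID, result);
    }
}

class MetaProxy
{
    // Hypothetical ambient client ID; ThreadStatic gives each thread its own copy.
    [ThreadStatic] public static int CurrentClientID;

    Proxy proxy = new Proxy();

    public int Foo()
    {
        var x = proxy.Foo();
        Tracking.Register("Foo", CurrentClientID, x);
        return x;
    }
}
```

The upside is the default constructor: the metaproxy is now a drop-in replacement for the proxy even at construction sites that can't supply a clientID.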
-
Application efficiency?Actually I made a mistake; correct would be no instance *state*, i.e. no instance fields, only local variables and method parameters. But it sure can work also with only static methods, though in that case you of course *also* lose the ability to use polymorphism. Note that this doesn't mean static fields are OK either: they too live on the heap and represent shared state. Come to think of it, "isolated state" or "local state" would actually be better names for this type of architecture than "stateless". And again, personally I'm no fan of the beast.
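To make "no instance state" concrete, a minimal sketch (class and method names are invented for illustration):

```csharp
// "Stateless" in the isolated-state sense: no instance fields and no static
// fields, so instances share nothing and any instance can serve any call.
class PriceCalculator
{
    public decimal Total(decimal unitPrice, int quantity, decimal taxRate)
    {
        // Only parameters and locals; nothing survives between calls.
        decimal net = unitPrice * quantity;
        return net * (1 + taxRate);
    }
}
```

Because each call touches only its own stack, the class is trivially safe to share between threads, which is the usual motivation for this style.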
-
strategy for correcting the side-effects of using Dock (which ignore Margin settings)I find your question confusing, because I can't recall having seen a single Windows Forms control where the Margin settings actually produce any effect. I've always had to reach for Padding instead. But this does lead to just a quick suggestion: try to use padding instead, and let us know if that works.
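For what it's worth, a minimal sketch of the suggestion (the control names are made up; the point is that a docked child ignores its own Margin, but the *container's* Padding still insets it):

```csharp
using System.Windows.Forms;

// Dock ignores the child's Margin, but the parent's Padding applies:
// the docked child is inset by the panel's Padding on all four sides.
var panel = new Panel { Dock = DockStyle.Fill, Padding = new Padding(8) };
var list = new ListBox { Dock = DockStyle.Fill };
panel.Controls.Add(list); // list fills the panel minus the 8px padding
```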
-
C# obtain return code from a proxyWell, that's what we get with metaphors. If you're speaking in patterns, what you're saying may be well-defined, but as long as we just use metaphors things are always kinda vague. A proxy is a kind of facade and also a kind of abstraction layer, wouldn't you say? But to my mind, the facade metaphor is a little inappropriate here, because the thing about a facade is that it looks completely different from what it's hiding. That's usually the point of having a facade: to cover the ugly stuff. :) A proxy, on the other hand, looks just like the thing it is a proxy for. So here I think we're making a proxy for the proxy. But rather than start explaining all this and use "metaproxy", I resorted to the more general conception of an abstraction layer. Of course, in practice one would only expose those bits of the interface that one actually uses. So then I suppose it kinda acts as a facade too... Gee, it's hard to say anything sensible about this stuff. As a programmer one ventures to the borders of philosophy all the time. But then that's one of the things I find appealing! :D
-
Array of Double - Shallow vs. Deep Cloning?As to whether that difference is *important*... well, that completely depends on what you do with them. Say you have a User object and this involves the concept of a list of Favourites (products, girls, doesn't matter). It'd probably be a List<Favourite> rather than a Favourite[], but no matter. The point is if for some reason you wanted to let one user copy another user's favourites, what should you do? Just point to the same list? Or make an actual copy of the list? Surely you should copy it, that way if user A subsequently edits his copy, the other one is not affected. What about the Product objects themselves? Well, there's no need to clone them unless they "belong" to the user. If they are shared, you could happily point to the same Products, and both users would see any changes to products they have in their respective lists. -- I know this isn't the right forum for it, but I gotta ask: Are you really from Hell, Eddy? A lot of English speakers claim they're from Hell, but most of them don't even know where it is! Look up "Hell, Stjørdal, Norway" on Google Maps, and you won't be one of them. ;)
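The favourites example in code, roughly (using strings in place of Product objects to keep it short):

```csharp
using System.Collections.Generic;

var a = new List<string> { "gizmo", "widget" };  // user A's favourites

var shared = a;                  // just another reference to the SAME list
var copy = new List<string>(a);  // a new list holding the same elements

a.Add("doohickey");
// shared now sees 3 items (same list); copy still has 2.
// Either way the *elements* are shared - that's the shallow part.
```

With reference-type elements (real Product objects), the copied list would still point at the same products, which is exactly the "shared products" behaviour described above.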
It depends. It *always* depends.
-
SQL Connection problem in Windows ServiceCould be that, could be a firewall, could be something else. Before trying to fix it though I'd introduce a minimum of logging in the service so in future you don't have to stare blindly at that nearly information-free message: "the service started, and then stopped unexpectedly". It's also nice to have the ability to debug service initialization. To do this, introduce a config setting and put a method like this into your service class:
static void AwaitDebugger()
{
    string s = ConfigurationManager.AppSettings["debug"];
    if (s == null || s.ToLower() != "true") return;

    DateTime until = DateTime.Now.AddSeconds(30);
    while (DateTime.Now < until && !Debugger.IsAttached)
        Thread.Sleep(50);

    if (Debugger.IsAttached) Debugger.Break();
}
Now in the service main thread, just insert a call to this method before doing anything else. (Not in the start method that probably simply starts the main thread; you want this method to return immediately as before. But in the thread it starts, you want to wait for a debugger to attach if so configured.) Build, deploy, and debug!
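The wait loop above follows a general pattern that can be pulled out on its own; here it is as a reusable helper, with the condition and timeout as parameters instead of being hard-coded (names invented for illustration):

```csharp
using System;
using System.Threading;

static class Polling
{
    // Polls a condition until it holds or the timeout expires;
    // returns whether the condition was ever observed true.
    public static bool WaitFor(Func<bool> condition, TimeSpan timeout)
    {
        DateTime until = DateTime.Now + timeout;
        while (DateTime.Now < until && !condition())
            Thread.Sleep(50);
        return condition();
    }
}
```

AwaitDebugger is then just `Polling.WaitFor(() => Debugger.IsAttached, TimeSpan.FromSeconds(30))` guarded by the config check.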
-
Multiple Versions Of ApplicationA custom BUILD doesn't have much to do with configurability (run-time metadata as you call it). All of this is really architecture considerations to have in mind if you want to make it easy to create and deploy custom builds. None of it really addresses how to create custom builds. That is all about managing the code, and the short answer is "branches".
It depends. It *always* depends.
-
Multiple Versions Of ApplicationA custom build is necessarily a different BUILD, so things like run-time metadata are out the window. :)
-
DataCache DesignIt depends. If it's ADO.NET and SQL2005 or higher, you can define a "cache dependency" that automatically refreshes cached stuff when it changes in the database. I suspect it relies on the fact that SQL2005 can host the CLR to have a way to swap the ordinary roles (ordinarily we use database servers as... SERVERS, but here the *server* needs to notify a *client*, which simply isn't possible given the definitions of what a client and a server are). If it's ADO.NET and an older SQL Server version, I believe there's a cache dependency that *might* help. It depends. That one uses a polling model, which means there's a delay involved. Dirty cache reads are still possible. If dirty reads cause serious problems, you can't use this. If seeing the old data is OK for a short time after it has really changed, you probably can. Otherwise, you probably have to write the code yourself to keep the cache in sync. If the data is only modified by a single instance of your own application, this isn't really difficult to accomplish. If there are multiple instances it gets more complicated, unless the instances operate on isolated subsets of the data, since you will now need to establish some mechanism to keep the caches in sync. It will also mean a performance hit and will require distributed transactions. Cache invalidation is one of the hardest problems in all of common practical programming. In many cases, there are better practical approaches. Rather than trying to solve the cache invalidation problem, it may be easier, and good enough, to combine imperfect caching with optimistic concurrency. Concurrency is an often-ignored aspect of multi-user systems that you should probably address anyway (most apps should, but few do). Depending on how you handle concurrency violations (e.g. you could let the user resolve conflicts), having this in place often makes it acceptable to live with some dirty cache data.
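To illustrate "imperfect caching" with bounded staleness, here's a tiny expiring cache (a sketch, not production code: no locking, no size limit, no removal of expired entries):

```csharp
using System;
using System.Collections.Generic;

class ExpiringCache<TKey, TValue>
{
    readonly Dictionary<TKey, Tuple<TValue, DateTime>> items =
        new Dictionary<TKey, Tuple<TValue, DateTime>>();
    readonly TimeSpan ttl;

    public ExpiringCache(TimeSpan ttl) { this.ttl = ttl; }

    public TValue GetOrAdd(TKey key, Func<TValue> fetch)
    {
        Tuple<TValue, DateTime> entry;
        if (items.TryGetValue(key, out entry) && DateTime.UtcNow < entry.Item2)
            return entry.Item1; // still fresh; may be dirty, but only for at most ttl

        var value = fetch(); // the expensive call we're trying to avoid
        items[key] = Tuple.Create(value, DateTime.UtcNow + ttl);
        return value;
    }
}
```

The design choice is simply to accept dirty reads and cap how long they can persist via the TTL, rather than trying to invalidate precisely.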
You can use a timeout policy to prevent old data from sticking around for too long. Volatile data should perhaps not be cached at all. The ideal cache item is one that is expensive to get but changes infrequently. Conversely, never cache stuff that is inexpensive to get for very long: saving a millisecond makes sense in something that happens often, but whether you save a millisecond every minute or every hour is not going to make any difference to the performance of your system. So in such cases, use a short timeout. Also don't forget that there are frequently opportunities to influence how expen