You can have Exposé for Windows. There is an excellent tool called Switcher that will give you the identical effect. It works seamlessly alongside all of the other window-switching features of Windows as well, including ALT-TAB, Flip 3D, Aero Peek, live thumbnails, etc. Switcher: http://insentient.net/ You can feel free to buy a Mac, but owning one myself, I can tell you their simple beauty is simple for a reason. You can't do a fraction of the things you can do with Windows on a Mac. Windows has been and probably always will be a more powerful operating system. With the advent of Windows 7, I believe it is far superior as well.
Jon Rista
Posts
-
So there was this guy sitting next to me on the train with a Mac notebook... -
Simple ASP.NET web control inheritanceYou can inherit code, but not markup. If you want a hierarchical control model, you need to design your base controls to use either manually rendered HTML with helper methods that inject HTML rendered from child controls, or control composition with helper methods that allow child controls to inject their own content.
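As a rough illustration of the rendering-helper approach (a minimal sketch assuming System.Web; the control names here are hypothetical, not from the original question):

```csharp
// Base control renders shared outer markup and exposes a hook that
// derived controls override to inject their own content.
using System.Web.UI;
using System.Web.UI.WebControls;

public abstract class CardControlBase : WebControl
{
    protected override void RenderContents(HtmlTextWriter writer)
    {
        writer.RenderBeginTag(HtmlTextWriterTag.Div);   // shared wrapper markup
        RenderBody(writer);                             // child-supplied markup
        writer.RenderEndTag();
    }

    // Derived controls override this to inject their content.
    protected abstract void RenderBody(HtmlTextWriter writer);
}

public class GreetingCard : CardControlBase
{
    protected override void RenderBody(HtmlTextWriter writer)
    {
        writer.Write("Hello from the derived control.");
    }
}
```

The same idea works with control composition: the base control builds a child-control tree in CreateChildControls and calls a virtual method letting subclasses add their own controls to a placeholder.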
-
Using MapsYou should also look at the Microsoft Virtual Earth APIs for .NET. You get a complete .NET framework for integrating Microsoft Live maps without all the hassle of first wrapping Google's APIs for use with .NET.
-
Petition for WCF4: No ServiceContract or OperationContract attributes!With .NET 4.0 looming on the horizon, and my current war with WCF trying to make it compatible with my POCO/POCI (Plain Old CLR Interface ;P ) architecture, it seems time to petition Microsoft to eliminate the need for those pesky ServiceContract and OperationContract attributes on service interfaces. To compensate, they need to add the ability to define service and operation contracts through configuration, so we don't completely lose the ability to configure messages, provide versioning, define service namespaces, etc. Who else here loathes the ServiceContractAttribute and OperationContractAttribute as much as I do, and wishes Microsoft would remove them as a requirement and provide an alternative configuration-based approach to defining contract metadata? Voice your vote here! (And perhaps, if there are enough votes, the CodeProject staff could get the Microsoft .NET 4.0 guys to come take a look.) -
Disassemble Linq queryActually, I made a small mistake in my previous comment. What the C# compiler translates your code into isn't specific to C#...it's specific to IL. For example, IL supports the fault block in a try/catch statement, but normal C# code does not. Reflector doesn't really have an option when it encounters a fault in IL...so it puts a fault {} block in the C# code it generates.
-
Disassemble Linq queryIs there any way you can post the full function from Reflector? It seems that h__TransparentIdentifier1a is actually a variable or field or something beyond the scope of the LINQ query, as is evident here:
select new { <>h__TransparentIdentifier1a = <>h__TransparentIdentifier1a, pi = pi }
This creates a new anonymous type whose first property is a reference to some variable called <>h__TransparentIdentifier1a that is not within the scope of the LINQ query...so if it compiled at all, that identifier must exist somewhere other than in the LINQ query. -
Disassemble Linq queryJudah Himango wrote:
I've seen bugs in Reflector, though, where it doesn't generate valid C# code.
They aren't bugs; Reflector is generating the internal intermediate notation the C# compiler produces when it translates syntactic sugar into something it will compile internally. For example, the fault{} block of a try/catch/fault/finally is only supported internally by the C# compiler, not in normal code. The notation where types, functions, and identifiers start with <> is also an internal notation, and is technically valid, just not in normal use. You can see the same identifiers when you use ILDasm.
-
Performance in C# applicationI would recommend LINQ to SQL. You can build a conceptual model with L2S, then add stored procedures that return objects from your model. You are still able to use SPs, but you also get strongly typed objects rather than data sets. L2S is not slow if you use it for what it is...an ORM. The SQL generated by L2S is actually very efficient. If you use it with procs, you won't really see any of the performance benefits...but neither will you run into anything that could cause performance problems (i.e. working with huge object graphs and their changes, which can get kind of hairy). Entity Framework is premature. It has potential, but it's up to Microsoft to realize that potential. Currently, EF is very intrusive and heavy. It will work great for non-distributed apps where the client app is not separated from its business layer by web services or remoting. However, if there are web services separating your presentation from your domain, then EF is a real disaster. There is another free ORM, called NHibernate. It came from the Java world, so it doesn't fit well with Microsoft standards, but it is one of the better ORMs out there. It still has some of the problems that EF does, as it is a bit intrusive; however, it generally supports POCO and PI, so it's currently a better choice than EF if you want a real ORM. Since you are currently looking to use stored procs, I would definitely go with L2S. It's the simplest solution that will get you the quickest results with procs.
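To illustrate the proc-plus-L2S combination (a sketch only; the context, proc, and entity names are hypothetical — the designer generates the typed method when you drag a stored procedure onto the model surface):

```csharp
// L2S maps a stored procedure to a DataContext method that returns
// strongly typed entities from your model instead of raw data sets.
using (var db = new NorthwindDataContext())          // hypothetical generated context
{
    // GetCustomersByCity is a hypothetical proc mapped by the designer;
    // L2S generates it returning ISingleResult<Customer>.
    foreach (Customer c in db.GetCustomersByCity("London"))
    {
        Console.WriteLine(c.CompanyName);
    }
}
```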
-
Performance in C# applicationI can't say exactly what the memory load would be. It entirely depends on the size of your records. If they are all maximum size (approximately 8,000 bytes, barring varchar(max) columns), and factoring in .NET overhead, 100,000 records will use close to a gig of memory. That is JUST for the rows...that doesn't factor in any other memory used by your application. Assuming you want this application to work on 32-bit systems, you will really be pushing it, as each app in 32-bit Windows gets a maximum of 2 GB of addressable memory space. That aside, there are much deeper concerns than memory consumption here. You are trying to cache ALL of your data in memory. It's not as simple as sticking all your records in a collection. There are usually relationships between data, concurrent updates, general data integrity, etc. to worry about when you cache that much data. Despite the fact that you have already started development, I think you need to take a step back and really evaluate what caching all your data means to the long-term sustainability of your project. You may think that continuing on will save time and money...but you are just as likely to introduce some very complex scenarios due to your caching that will need to be addressed down the road, and that could cause your long-term costs to explode far beyond any short-term costs of reevaluating your approach.
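The row-size arithmetic above can be sanity-checked in a few lines (a sketch; it assumes the ~8,000-byte in-row maximum and ignores .NET object overhead):

```csharp
// Back-of-the-envelope estimate: 100,000 rows at the ~8,000-byte
// maximum row size, before any .NET object overhead is added.
using System;

class MemoryEstimate
{
    static void Main()
    {
        const long rows = 100000;
        const long bytesPerRow = 8000;      // max in-row size, barring varchar(max)
        double gb = (rows * bytesPerRow) / (1024.0 * 1024.0 * 1024.0);
        Console.WriteLine("{0:F2} GB of raw row data", gb);   // ~0.75 GB before overhead
    }
}
```

With object headers, field alignment, boxed values, and collection overhead on top, the "close to a gig" figure is easy to reach.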
-
Disassemble Linq queryThat's because it starts with <>. You have to remove all of the <> at the beginning of any identifier...dotted or not: <>h__TransparentIdentifier1b.<>h__TransparentIdentifier1a.b.brand_id.Equals(brand_id) should be: h__TransparentIdentifier1b.h__TransparentIdentifier1a.b.brand_id.Equals(brand_id)
-
Performance in C# applicationSuch extensive caching is usually not required unless you have some tremendous load on your servers. Generally speaking, you should avoid caching as much as possible, because it adds an additional level of complexity that must be managed in addition to managing the data in the database. You greatly increase the risk of data corruption by caching everything all the time, because it is now your responsibility to make absolutely certain data integrity constraints are met, rather than letting an RDBMS do it for you. Not only that, you will indeed greatly increase your memory footprint, and that footprint will grow as the usage of your application grows, to the point where you can't keep everything in memory at all times. If you already have performance problems...then there are other ways to solve them...such as scaling out your hardware (better hardware, more servers, etc.) If you do not have performance problems yet, and are trying to preemptively solve possible performance problems...don't bother. It is difficult to predict what may cause performance problems; caching isn't always the best solution, and should generally not be the first. You can gain much more in terms of performance improvements for less cost by adding hardware than by increasing the complexity of your code.
-
Disassemble Linq queryYou have the original query. Just change <>h__TransparentIdentifier1b into something compilable...such as h__TransparentIdentifier1b. The <> prefix is a notation the C# compiler uses when it transforms your source code into something that can then be turned into IL, but it's not actually compilable. Just remove that prefix and you should be fine.
-
Is the a max row count for SQLDataReader?....never ever.....ever.....ever. :suss:
-
Is the a max row count for SQLDataReader?I have to agree with Luc. I think you're running into a problem on that particular record; it's throwing an exception, and the exception is getting swallowed. Stick a breakpoint in your catch clause and see what is going on.
-
Is the a max row count for SQLDataReader?Well, my guess is that the sheer number of records you're processing in-memory is just flat out insane. :P The problem is most likely not a row count limitation of SqlDataReader...but rather a memory limitation. You are trying to create a massive List of Record objects...the overhead for that is going to be fairly high compared to just displaying rows of text in, say, Sql Management Studio. Depending on exactly how many columns you are setting on your Record object, the data types of each, and the lengths of strings that may exist in your result set...you could be looking at GIGS of memory usage here. I am not exactly sure what you're doing or what your needs are...but you should look into batching your work. Either chunk it up into significantly smaller data sets (say, 10000 records each) and process them one at a time...or distribute those chunks out to a server farm to process them in parallel. If you absolutely need to process all 8 million records at once...then you are probably doing something that calls for LAPACK or some other matrix or vector processor that can efficiently handle large data sets.
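The chunking idea can be sketched like this (illustrative only; the InBatchesOf name and the integer stand-in for the real Record stream are my own, not from the original thread):

```csharp
// Process records in batches of 10,000 instead of materializing all of
// them at once: only one batch is held in memory at any given time.
using System;
using System.Collections.Generic;
using System.Linq;

static class Batcher
{
    public static IEnumerable<List<T>> InBatchesOf<T>(IEnumerable<T> source, int size)
    {
        var batch = new List<T>(size);
        foreach (T item in source)
        {
            batch.Add(item);
            if (batch.Count == size)
            {
                yield return batch;            // hand off a full batch
                batch = new List<T>(size);
            }
        }
        if (batch.Count > 0)
            yield return batch;                // final partial batch
    }

    static void Main()
    {
        // Stand-in for a streamed result set of 25,000 records.
        IEnumerable<int> records = Enumerable.Range(1, 25000);
        int batches = 0;
        foreach (List<int> batch in InBatchesOf(records, 10000))
        {
            batches++;                         // process one batch here
        }
        Console.WriteLine(batches);            // 3 batches: 10000 + 10000 + 5000
    }
}
```

Because the source is streamed (as a SqlDataReader is), this keeps the working set bounded regardless of the total record count.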
-
two difference javascript alert using check boxes and delete buttonYou need to wire up the JavaScript call on the server side. You will only really be able to get the client ID on the server, where all that information is readily available. Something like the following:
protected void Page_Load(object sender, EventArgs e)
{
    // Pass the rendered client ID into the client-side handler.
    chkMyCheckBox.Attributes.Add("onclick", "jsVerify('" + chkMyCheckBox.ClientID + "')");
} -
Filtering LINQ query into another 'var' variable [modified]You shouldn't have to use First or FirstOrDefault. When you create a LINQ query, the result you get out (assuming you're not running an aggregate function or First/Last/FirstOrDefault/LastOrDefault) is not the actual result of the query...it's ultimately an IEnumerable that defines what the query should return when it is enumerated. You should be able to apply multiple modifications to your query before you actually iterate it:
var query = from t in dc.Table_Name select new { t.id, t.name, t.description };
IEnumerable<int> queryOfID = from t in query select t.id;
foreach (int id in queryOfID)
{
    Console.WriteLine(id);
}
The above should work. The only scenario where it may not is if you iterate over query once, then try to create queryOfID and iterate over that. Even with LINQ to SQL, iterating more than once over a query that returns data from a database should work, though there may be certain scenarios where subsequent iterations are prevented to avoid unexpected modification of the database (it would depend...I can't think of an example scenario right now that might cause that).
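A self-contained LINQ-to-Objects version of the same idea (illustrative in-memory data; no database or DataContext involved) shows the composition and deferred execution:

```csharp
// The first query is composed into a second one; nothing executes
// until the foreach actually enumerates the final result.
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var rows = new[]
        {
            new { id = 1, name = "a" },
            new { id = 2, name = "b" }
        };

        var query = from t in rows select new { t.id, t.name };
        IEnumerable<int> queryOfID = from t in query select t.id;  // still deferred

        foreach (int id in queryOfID)
            Console.WriteLine(id);   // prints 1 then 2
    }
}
```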
-
[Message Deleted] -
to execute SQL queries after a specific time intervalI would look into writing a Windows service that runs in the background to perform such a job, rather than trying to write it as part of a web site. A service is autonomous and can be guaranteed to be executing, so your job will run on the appropriate schedule. It's difficult, if not impossible, to make such a guarantee from a job triggered by an ASP.NET process. Just a word of warning...executing a query every 5 seconds against a database could cause problems with performance and load. It might be better to reevaluate why you need such a job, and perhaps approach it in a different way (one that does not involve executing queries against an Access database every 5 seconds). A Windows service could still provide a host for your job.
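The scheduling core such a service could host might look like this (a minimal sketch using System.Timers.Timer; RunQuery is a placeholder name for the real database work, and a real service would start the timer in OnStart rather than Main):

```csharp
// A timer firing every 5 seconds, standing in for the scheduled job.
using System;
using System.Timers;

class ScheduledJob
{
    static void Main()
    {
        var timer = new Timer(5000);              // 5-second interval
        timer.Elapsed += (sender, e) => RunQuery();
        timer.AutoReset = true;                   // keep firing
        timer.Start();

        Console.WriteLine("Press Enter to stop.");
        Console.ReadLine();
    }

    static void RunQuery()
    {
        // Placeholder: open a connection and execute the query here.
        Console.WriteLine("Query executed at {0:T}", DateTime.Now);
    }
}
```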
-
C# alias#defines as in...macros?
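If the question is about type aliases rather than value macros: C# has no #define macros with values, but it does offer using-alias directives that rename a type within one file (a small illustrative sketch; the alias and data are made up):

```csharp
// A using-alias directive: CustomerList means List<string> in this file.
using System;
using CustomerList = System.Collections.Generic.List<string>;

class AliasDemo
{
    static void Main()
    {
        CustomerList names = new CustomerList { "Ann", "Bob" };
        Console.WriteLine(names.Count);   // prints 2
    }
}
```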