Code Project

Zot Williams (@Zot Williams)

Posts: 11 · Topics: 1 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • Am I wrong?
    Zot Williams

    It all depends on what you are trying to learn and what level you're starting from. C isn't a great language for absolute beginners because it has quite a weird and confusing syntax - there are more instructional languages to teach the basic concepts of programming and ease someone into it. However, if you understand the basics and want to get to grips with lower-level things, then C is at the simpler end of the C-style-language spectrum, and it does allow access to very low-level stuff that is gradually hidden away as you move to the C# and Java end of things. But if you really want to go low level, then you need to write some assembler, so that when you write your high-level code you understand what it is the computer is actually doing, and why.

    If you want to learn to be a better programmer, then there is no "one" language. Learn at least a bit about as many as you can. Functional programming is a good example: such languages will make you think about the same problem in a very different way, and when you come back to a procedural language you'll write better code because you have a wider view (lateral thinking) of how the problem can be approached.

    Lastly, I find the best way to learn is to "just do it". Don't read or re-use someone else's solution, but actually sit down and write the whole program yourself. Want to read an XML file? Then write a simple XML parser. The next time you use an XML parser from a library you'll understand what it has to do and why it's so slow. You'll know how you can re-phrase your XML data layout to make the files faster to load, smaller to transfer, and easier to manipulate. As well as this, if you do it, you'll remember it; if you only read it, a lot of it may just fade away, unused.

    The Lounge c++ java delphi data-structures tutorial
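The "write a simple XML parser" exercise above can be made concrete with a toy sketch. This is a deliberately minimal Python parser for a tiny XML subset (nested elements and text only; no attributes, comments, entities, or self-closing tags) - the names and subset are chosen here purely for illustration:

```python
import re

def parse(xml):
    """Parse a minimal XML string into nested (tag, children) tuples,
    where children is a list of child tuples and/or text strings."""
    # Tokenise into tags (<...>) and runs of text between them.
    tokens = re.findall(r"<[^>]+>|[^<]+", xml)
    root = ("document", [])
    stack = [root]                       # stack of currently open elements
    for tok in tokens:
        if tok.startswith("</"):         # closing tag: pop the open element
            stack.pop()
        elif tok.startswith("<"):        # opening tag: start a new element
            node = (tok[1:-1], [])
            stack[-1][1].append(node)
            stack.append(node)
        elif tok.strip():                # non-whitespace text content
            stack[-1][1].append(tok.strip())
    return root

doc = parse("<book><title>C</title><year>1978</year></book>")
print(doc)  # ('document', [('book', [('title', ['C']), ('year', ['1978'])])])
```

Even this toy version makes the point of the exercise: you can see why a real parser spends its time tokenising and allocating nodes, and why flatter document layouts load faster.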

  • Do you wear out your keyboards?
    Zot Williams

    I don't beat them to death, but I've worn away many a letter, and actually wore my way through a Ctrl key to the point where a hole developed in the surface. Luckily the key tops are easily replaced. :-)

    The Lounge question

  • Do you think math people are the best programmers?
    Zot Williams

    Quite simply, the best programmers are programmers. I've *never* worked with a mathematician who was a great programmer, but I have met plenty of programmers who were great at maths. If people are primarily interested in programming, that's what makes them good. If they are primarily interested in maths or engineering or art, then they will be (by definition) less interested in programming and therefore less good at it than they could be.

    Historically, computing was used primarily for mathematical problem solving, so of course it involved a lot of maths and mathematicians. The names you quote are ancient history in the computing world. But we are increasingly standing on the shoulders of giants. We don't need to be able to calculate a Fourier transform on paper in order to be capable of *using* an FFT in our programs. We can write a physics-based game with no ability to do physics calculations on paper. We can produce a website with complex financial graphs without having a degree in statistics. We have tools (libraries), and we need only understand how and when to use them, not how to build them from scratch.

    The Lounge question discussion

  • SSD's, what's the latest word?
    Zot Williams

    Not finding much info on SSDs with respect to developers, I did quite a few timing tests when I got my drive.

    I got the SSD (80GB) for high read/write speed and zero seek times rather than size - it's a cache for the things that will make the most difference, with the HDD being perfectly fast enough for all the big data files I use (seeking is really what cripples HDDs; sequential access is fast). I split it into 2 partitions: C: for the OS/pagefile and most-used apps, and D: for my code. (I've only put data on the SSD that I can reinstall or recover from Source Control if the SSD fails.) All the other "non-backed-up" data is on my old HDD, and the entire SSD is backed up onto the HDD once a week. If the SSD fails, I can just dual-boot back onto my HDD and be up and working in about half an hour (just get and rebuild the code).

    With the SSD, install times are much better. Sure, you don't install often (apart from endless @#!$%* Adobe bug/security updates), but it's so much better when you do. Startup times are also significantly improved. Boot time (cold start to having solutions open and ready to work in 2 instances of Visual Studio) went from 7 minutes to 18 seconds! Visual Studio 2010 startup time dropped from around 10 seconds to about 2-3s. Shutting down dropped from 40s to 11s. With the disk caching in Win7, warm-boot times for apps are much less of a problem anyway, but it's still a few seconds faster with the SSD than a cached HDD.

    Installs/startup are nice, but how does it help with the minute-by-minute tasks of developing?

      • Apps are all slicker - lots of little things just happen noticeably faster, even things that I thought would be server-bound such as populating the TFS Team Explorer window - much more pleasant to use.
      • The time taken to compile our code dropped by 25% (16 minutes down to 12). Building a single-line change and running our app (to a point where I can start debugging) dropped from 59s to 42s. A small saving, but it happens so frequently, and that 17s was "dead time".
      • The big win is searching the codebase for something (which I do surprisingly often - usually several times a day). This used to take minutes and now takes seconds.

    These time savings mainly reduce frustration/tedium, but they shave around 25% off all the delays in the day - the ones that are so short that you can't switch to another task, so you don't do anything but simply wait. I conservatively calculated the break-even point on the cost of the drive at about 2-3 months. Interestingly, there is now very litt…

    The Lounge csharp html asp-net database visual-studio

  • Documentation: link from The Insider
    Zot Williams

    You're missing the point. Tools like GhostDoc or Atomineer can save a lot of time doing the most dull and repetitive parts of documentation, such as keeping the docs in sync with changes to the code - time that you can spend on better documentation or on writing more code. Auto-generated documentation can't tell you more than the code already "says" - but it can summarise the key points in a more readable form and save a lot of time writing and updating comments. If you don't like the text that is provided, then you're missing the point - that text appeared instantly, and can be incorporated into your own descriptions to save you bucketloads of time. These add-ins are tools, and if used correctly, they can save you thousands of dollars of time a year.

    To answer the OP, I usually write a description of my class/method and then fill in the implementation to match that "micro design". This is a quick approach that really helps to iron out my assumptions and design ideas by thinking about the code before I write it. And then it acts as documentation for any reader, to help them understand the code later. I simply can't write code without writing down an overview of what it will do first. It's much faster overall because I make far fewer mistakes and produce far fewer bugs.

    The Lounge visual-studio com xml help tutorial
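The "micro design" workflow described above - write the description first, then make the implementation match it - might look something like this in practice (a Python docstring standing in for the XML doc comments; the function and its behaviour are invented here for illustration):

```python
def chunk(items, size):
    """Split `items` into consecutive sublists of length `size`.

    The last chunk may be shorter if `len(items)` is not a multiple
    of `size`. Raises ValueError if `size` is less than 1.

    This docstring was written *before* the body, as a micro design;
    the implementation then only has to satisfy what it promises.
    """
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

The description then survives as reader-facing documentation, exactly as the post argues.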

  • Calling functions from Events
    Zot Williams

    We have a policy for our team: event handlers should (almost) always be trivial implementations, and delegate anything non-trivial to a different method and/or thread. This isn't just about separating UI from business logic, although that in itself is a good enough argument.

    Any code, not just UI, can raise events. It's a great way of dynamically coupling systems together. The risk in this is that the sender has no idea about who might subscribe to the event in the future, or what they may need to do. How can you design the sending code when you don't know (a) how long it will take before the event handlers return, (b) what the event handlers will do, (c) what program state they might change, (d) what calls they might make back to the sender to get a bit more information, (e) what threading implications there may be, (f) what exceptions they might throw?

    Imagine you raise an event on a worker thread. Some UI code subscribes to this event. It needs to update a button, so it Invokes across to the UI thread. The worker may now be blocked for a long time (so performance could suffer) or possibly even deadlocked by the UI thread. This is very difficult to design for unless you follow some best practices in your event handlers.

    Another common problem is when a system responds to an event and changes its own state. When its state is changed, it of course raises its own events to allow other systems to react to those specific changes. Pretty soon we wind up with a cascading event that crushes performance, or possibly even reentrancy problems.

    An issue that often occurs when multiple listeners subscribe to an event is that it is hard to control the order in which the event handlers are called, so they can begin to have side effects on each other (and worse, the side effects change if the order of calls changes). A way of mitigating this is to minimise the work done in event handlers - the more complex they are, the higher the risks.

    Another common problem with events is that if we react to them immediately, we can end up doing a lot of unnecessary repetitive work. Imagine if we raise an event every time our document changes, and the UI reacts to this to redraw a window every time the event fires. Seems logical. But then we load the document from disk and, as the structure is built piece by piece, the event is fired 300 times, causing 300 redraws of the window (result: extremely slow and flickery loading). So then you add a bodge to suppress those updates during loading. And then you find a similar prob…

    The Lounge csharp question
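The "trivial handler" policy and the 300-redraws problem above can be sketched together. In this toy Python model (all class and method names are invented for illustration, not from any real framework), the handler only records that something changed, and one coalesced redraw happens when the batch is pumped:

```python
class Document:
    """Raises a 'changed' event (a plain callback list) on every mutation."""
    def __init__(self):
        self.changed_handlers = []       # subscribers to the changed event
        self.items = []

    def add(self, item):
        self.items.append(item)
        for handler in self.changed_handlers:
            handler()                    # fires once per change

class Window:
    """Subscriber whose handler stays trivial: it just sets a dirty flag."""
    def __init__(self, doc):
        self.redraws = 0
        self.dirty = False
        doc.changed_handlers.append(self.on_changed)

    def on_changed(self):
        self.dirty = True                # trivial: record the fact, do no work

    def pump(self):
        if self.dirty:                   # one redraw per batch of changes
            self.dirty = False
            self.redraws += 1

doc = Document()
win = Window(doc)
for i in range(300):                     # "loading" fires 300 change events...
    doc.add(i)
win.pump()
print(win.redraws)                       # 1 -- not 300 redraws
```

The same shape applies in C# with real events: the handler sets a flag or posts a message, and the expensive work runs later, once, on the right thread.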

  • A piece of criticism on "to go fast, do less"
    Zot Williams

    Your arguments don't really prove the statement to be false. Sure, we can finish a task earlier ("faster") by splitting the work to be done over many ALUs or cores. But this actually means each pathway/core is doing less work. It's simple division of labour. (Of course, making use of parallel pathways usually incurs overheads, so strictly speaking the code is actually running "slower", but completing earlier due to parallelism.)

    In the case of caches, what can you do to reduce misses? 1) Organise your data access patterns to make more efficient use of cache coherence. In other words, optimise your code so the cache/bus/memory systems have less work to do. 2) Reduce the size of the data being cached so more fits into a line of cache, thus making the cache more effective. So doing/using less to achieve the same task.

    As for "used to be true", none of what you have proposed is new! Parallel processing has been used for years (a top example is Henry Ford using it to revolutionise the car manufacturing process, but of course 'many hands make light work' is an ancient concept). Or if you want to be a bit more geeky, optimising cache/memory access has many similarities to optimising programs on drum memory (see Wikipedia). Different technology, same underlying problem.

    The Lounge css data-structures architecture tutorial learning
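Point (1) above - organising access patterns - is usually illustrated with 2-D array traversal order. This Python sketch only shows the two access orders (Python lists hide real memory layout, so the cache effect itself is only measurable with the same loop shapes in C/C++ over a contiguous array):

```python
ROWS, COLS = 4, 3
# Fill a[r][c] with its row-major storage index, so the traversal
# order below directly shows which memory offsets get touched.
a = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]

# Inner loop over columns: visits offsets 0,1,2,3,... sequentially,
# so each cache line fetched is fully used before moving on.
row_major = [a[r][c] for r in range(ROWS) for c in range(COLS)]

# Inner loop over rows: each step jumps COLS elements, striding
# across memory and wasting most of every fetched cache line.
col_major = [a[r][c] for c in range(COLS) for r in range(ROWS)]

print(row_major)  # [0, 1, 2, ..., 11] -- sequential, cache-friendly
print(col_major)  # [0, 3, 6, 9, 1, 4, ...] -- strided
```

Swapping two loop headers is all it takes to turn the strided version into the sequential one - the same work, but the memory system has less to do.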

  • Article etiquette
    Zot Williams

    Thanks Hans. I'd enjoy writing some articles - but I have a 3-year-old and an 8-month-old, so I'm a bit pressed for time right now! :-) I do intend to start posting articles one day, though. Regards, Jason

    Site Bugs / Suggestions

  • Article etiquette
    Zot Williams

    Cool. Thanks Hans. :thumbsup: I'll post details in a catalog item, but the quick answer is: http://www.atomineer.com/AtomineerUtils.html :-)

    Site Bugs / Suggestions

  • Article etiquette
    Zot Williams

    I think my addin would be of great use to a lot of CodeProject users, so I'm keen to share it, but (while I'm happy to write some other articles for CP one day when my kids allow me enough spare time!) I don't feel happy with releasing the source for this particular project. I'll just have to rely on the grapevine! :-D Cheers

    Site Bugs / Suggestions

  • Article etiquette
    Zot Williams

    Hi, I've released a useful and free programming add-in for Visual Studio. I'm not willing to release the source code for it, so it's only available as a binary download. I'd like to let CodeProject users know about it, but I don't know if it would be considered bad etiquette or against the CodeProject rules for me to post a message/article as an "advert" for my addin. Can someone please advise me if this would be ok? Cheers, Jason

    Site Bugs / Suggestions