Code Project

Kimberley Barrass

@Kimberley Barrass
Posts: 7 · Topics: 0 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • What's the optimum number/combination of screens?
    Kimberley Barrass

    I always recommend three. Not so much for work, but because flight and racing sims are SO much better with the extra FOV. If it weren't for them, I'd happily work with two 22" screens, one rotatable and set mainly to portrait, and one landscape. Portrait mode is surprisingly good for code work, and is much, much better for reading docs. When not simming, the portrait monitor is precariously balanced above the landscape monitor, as I rarely use both at once in work mode, so moving my head to focus on the other monitor acts as a trigger that my focus is shifting. I spent a long while experimenting, though, and what's best for me is no good for other folks in the office or at home, so it's definitely down to personal preference.

    The Lounge question career

  • Best way to describe a threaded application ? (diagrams)
    Kimberley Barrass

    Well, sequence diagrams as part of your overall documentation set are fine, but for anything operating over multiple threads in a modern microcomputer architecture, where threads can be allocated individual cores, I think you are looking for something a little more focussed.

    If you are using your program alone (user threading) to control threading within a single process, then I would develop a set of livelock and deadlock scenarios which could occur within your process, using some kind of state diagram/table. Deadlock in particular is easily avoided with well-formed state transitions when you are using non-CPU-based threads (all I/O is effectively synchronous, and the threading is in fact just time-slicing rather than anything else).

    When using true threads outside the control of your own code, then consider either:

    a) Wrapping a thread controller class around your running threads. The controller has a forward-running check which can effectively interrupt a thread if it is making no forward progress. With this option, class, sequence and interaction diagrams along with state diagrams suffice, paying attention to the new class(es), but it is necessary to make sure that the controller thread is not itself locked. If possible, fork it into a new process space, possibly with new affinity. This will also alter your exception hierarchy and will require thought about how to deal with refusals or returns by the thread controller. (Highly recommended in complex scenarios, particularly where high, precise I/O is required but loss is acceptable: computer games, etc.)

    b) Isolating the most likely blocking activity and threading out these activities in a class (or classes) which allows buffering and non-blocking messaging into the class. This can be done in a lossless manner, but can be slow. Again, the class, sequence and interaction diagrams along with state diagrams are all that are required for this mechanism.

    All of these are standard diagrams and can easily be done in Visio, but the most important thing is to understand what your threads are doing, and to anticipate the exception scenarios. In particular, if you are considering terminating a thread from a controller, be wary of leaving library state changed or heap memory unreleased. Good luck with your endeavours anyway.

    The Lounge design question
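    Option (a), the controller with a forward-running check, can be sketched minimally in C++. This is a hypothetical illustration (the `ThreadController` name and the heartbeat-counter approach are assumptions, not a quoted design); a real controller would interrupt or re-fork a stalled worker rather than merely observe it.

    ```cpp
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    // Sketch of option (a): a controller that owns a worker thread and
    // exposes a forward-running check based on a heartbeat counter the
    // worker bumps whenever it makes real progress.
    class ThreadController {
    public:
        void run_worker(int steps) {
            worker_ = std::thread([this, steps] {
                for (int i = 0; i < steps; ++i) {
                    std::this_thread::sleep_for(std::chrono::milliseconds(5));
                    progress_.fetch_add(1);  // heartbeat: forward progress made
                }
                done_.store(true);
            });
        }

        // Forward-running check: has the heartbeat advanced since the last
        // call? A real controller would interrupt the worker when this stays
        // false; here we only observe.
        bool making_progress() {
            long now = progress_.load();
            bool advanced = now > last_seen_;
            last_seen_ = now;
            return advanced;
        }

        bool finished() const { return done_.load(); }
        long progress() const { return progress_.load(); }
        void join() { if (worker_.joinable()) worker_.join(); }

    private:
        std::thread worker_;
        std::atomic<long> progress_{0};
        std::atomic<bool> done_{false};
        long last_seen_ = 0;
    };

    int main() {
        ThreadController ctl;
        ctl.run_worker(4);
        ctl.join();  // worker ran to completion
        std::cout << "finished=" << ctl.finished()
                  << " progress=" << ctl.progress() << "\n";
        return 0;
    }
    ```

    The same heartbeat idea extends naturally to option (b): the buffering class can bump a counter on each message consumed, so the same check detects a stalled consumer.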

  • OO-DBMS
    Kimberley Barrass

    I have been asked this question before, but fail to see why it continually raises its (very) ugly head. Relational database technology is the product of over forty years of structured, algorithmically sound research into holding human-collected data. The mechanisms and languages which have evolved as part of that research are incredibly sound, and offer a fantastic, intuitive and performant mechanism for handling human-collected data sets. For that reason, a traditional RDBMS is the best product to deliver a data-handling solution when dealing with human-produced data.

    Object orientation, as a programming paradigm, came about through the need to manage complex hierarchies in a manner which allowed the human mind to manage the complexities involved. The use of expanded types, bringing real-world readability into a machine-parsed language, allows developers to be much more productive. The study of patterns and frameworks shows massive levels of improvement (in both creation and use) in OO and type-expandable languages, to the point where non-OO languages can struggle even to implement some types of framework.

    To a current developer who understands both paradigms well, the OO-DBMS may make sense as a route to seamless integration of data with the programming above it, but the fact that both of these fields have progressed to where they are and arrived at very beneficial and computable outcomes means I am obliged to defend their separation as beneficial, despite the fact that the integration code between well-formed OO business and integration tiers and relational data is at best inelegant, and at worst downright counter-productive. Having said all of that, I use embedded ODBMSs for all internal or computer-generated data (object serialisation, marshalling data, etc.) where I can, as such data often struggles to be mapped pleasantly to relational form, whereas I've found that relational modelling is much better (quicker and more easily complete) for human-produced data.

    The Lounge question

  • Windows 8 and the split personality Metro interface
    Kimberley Barrass

    I think I am in a minority of one when I say this (outside of MS at least), but I think MS have definitely made the right choice here. W.I.M.P., thirty-plus years in the making, has been honed and extended universally, and computing has moved from the domain of scientists, through experts, to the realm of an everyday consumer device; now W.I.M.P. has been found wanting. This is mainly due to form factor, mobility and ease of use, and so we are left with only two choices for the next thirty-plus years: either move all of our interfaces forward into a unified interface, or keep the two interfaces separate. For a manufacturer of software (and now hardware) for a variety of form factors, it seems to me that the best way forward is a single unified interface, or one which, like Metro, promotes a main interface with a secondary interface for expert and domain-specific users. This isn't a new phenomenon: even with W.I.M.P., the CLI has continued to exist for a variety of expert uses, and is available from most W.I.M.P. implementations. I think it is a fair assumption that in the future we will have a touch or gesture interface with single-app full-screen coverage as the primary interface for all devices, with access to a mature W.I.M.P. interface supporting multiple apps/windows for professional and workflow usage. Beneath this we will continue to have CLI access and tooling. MS, then, are the only company so far to give it a go, and that's got to put them in a position to drive this forwards over the next few years, so kudos to them, even if their first implementation leaves something to be desired.

    The Lounge design c++ mobile architecture help

  • Why I like C++
    Kimberley Barrass

    For Martin Cheng: I would like to add that points 2, 3, 4 and 5 are very, very important if you really want to get the most out of C++ and not fail later down the line. All of the points mentioned are aspects of object-oriented analysis/design/programming, and I would recommend learning this in parallel with C++ from good, well-written and proven sources: Booch, Yourdon, etc. This will almost certainly mean learning UML, if you do not already know it, depending on which books you pick up. A good design book, and the "patterns" books by the Gang of Four and Martin Fowler, are especially handy for understanding hierarchies and what C++ gives you above C, but be prepared to criticise and ignore them completely in practice if need be. Finally, try desperately to get "real world" problems and practice, practice, practice: small things like sorting algorithms, queueing mechanisms, even simple mutex controls, up to practical but low-level tasks like neural net engines, file format conversion engines, etc. (concentrating on the core objects underneath the GUIs/CLIs). Before you know it, you will feel that nothing is impossible and nothing is outside your ability... It's a great language. Go for it.

    The Lounge csharp c++ com question
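    One of the small practice exercises mentioned above, a queueing mechanism with simple mutex control, might look like this minimal sketch (the `SafeQueue` name and design are illustrative assumptions, not a library recommendation):

    ```cpp
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    // A minimal thread-safe queue guarded by a mutex, with a condition
    // variable so pop() can block until an item arrives.
    template <typename T>
    class SafeQueue {
    public:
        void push(T value) {
            {
                std::lock_guard<std::mutex> lock(m_);
                q_.push(std::move(value));
            }
            cv_.notify_one();  // wake one waiting consumer
        }

        T pop() {  // blocks until an item is available
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            T value = std::move(q_.front());
            q_.pop();
            return value;
        }

    private:
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable cv_;
    };

    int main() {
        SafeQueue<int> q;
        std::thread producer([&q] { for (int i = 1; i <= 3; ++i) q.push(i); });
        int sum = 0;
        for (int i = 0; i < 3; ++i) sum += q.pop();
        producer.join();
        std::cout << sum << "\n";  // prints 6
        return 0;
    }
    ```

    Exercises like this force you to think about lock scope and lost wakeups, which is exactly the kind of low-level understanding the post is recommending you build before moving on to bigger engines.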

  • Why I like C++
    Kimberley Barrass

    I earn my money, or rather used to before becoming management, in Java/.NET/custom script toolsets for bespoke hardware, with a little C occasionally; but despite never having touched it professionally, I find C++ the best language to satisfy my need as a programmer to learn, study and understand computing, and I find it fortifies my knowledge when dealing with all other areas of computing. I find it encourages polyglotism, as an anchor into C functions as well as all OOP languages (in one form or another). I also find that the literature (particularly the historic literature) is more scientific and less wishy-washy than that of other languages, and I prefer to learn in this manner. None of this "Java in 21 Days" rubbish, but academically useful books from a time when the OOP paradigm was being invented as a useful and functioning mechanism for complex solutions (and, along with the seminal C book, the best language reference book). Partly because of that, C++ is the best way (IMO) to learn frameworks, client-server, window managers (and the not-windows full-screen shenanigans of today), resource sharing, memory-management issues, and so, so much more. I would, and do, recommend that everyone learns C++ (and high- and low-level OOAD as a parallel task) even if they don't intend to use it in their day-to-day activity.

    The Lounge csharp c++ com question

  • Non-programming question about Java...
    Kimberley Barrass

    Personally speaking, I find it a fine and extensible language: fairly elegant, and the MASSES of freely available code mean it is very easy to learn and to use to actually do something useful. BUT, outside of the language itself, I find it despicable to actually deploy and use in an enterprise environment. The tuning is dire, PermGen growth is unacceptable, and I much, much prefer the .NET framework. If it weren't the easiest way to program on POSIX-based systems, I probably wouldn't use it at all...

    The Lounge question csharp java learning