Sometimes the bugs will cancel each other out. I wonder if they will improve human software so that all these lengthy nightly reboots can be avoided :)
YDaoust
Posts
-
Electronic Consciousness?
I see no reason why it wouldn't be possible. After all, the human brain is nothing but a big processor that obeys the laws of physics. Man-made systems with similar capabilities of cognition, affectivity, introspection... should be able to support consciousness. Wikipedia supplies interesting material on this topic (https://en.wikipedia.org/wiki/Mind_uploading). Moving consciousness from one being to another is something we (I) don't understand at the moment, and it seems to raise paradoxical situations.
-
for(int i=0; i<size; i++)
Probably i -> row and j -> column then.
-
for(int i=0; i<size; i++)
Don't they mean 'horizontal' and 'vertical'?
-
for(int i=0; i<size; i++)
You can use it undercover by means of a macro:
#define i index
;-)
-
Obscure C++ Features [from the newsletter]
I was quite delighted to discover, among the alternative operator tokens, keywords for &&, || and !. I never really liked these "rude" logical operators, and this gave me sufficient incentive to switch to the nicer literal representation. Isn't that beautiful?
if (i < N and not Odd(A[i]))
I will not embrace the tokens for &, | and ~, as these correspond to bitwise operators, which are more arithmetic in essence and compare to the usual +, -, *, /. Even less the _eq forms, which in my opinion are misnomers: e.g. or_eq should read bitor_eq.
-
for(int i=0; i<size; i++)
I tend to use Index instead of i (I definitely like verbose identifiers). And when I need several indexes, like in nested loops, I naturally call them Jndex, Kndex, Lndex... (not kidding). I have not yet come to the point that I use Number and Mumber for counts.
-
Logical thinking could get very complicated
How can you be sure that both procedures are doing the same thing? I mean in all cases, not just on a few test cases.
-
Documentation: link from The Insider
I tend to write the comments right before I code. The most effective technique is to describe the skeleton of the processing, in plain English, not pseudocode. I try to stick to one comment per three or four lines of code.
-
Fast path finding algorithm
Store your maze as a 2D image (I bet it already is). Paint the walls in black, the starting pixel in red, the ending pixel in green and all others in white. Keep an "active list" with the (coordinates of) pixels that can be reached from the ending point in at most N steps.
- Initialize the active list with the ending pixel (N=0).
- For increasing N, scan the active list:
  - For every pixel in the list:
    - Consider its white neighbors, if any.
    - Paint them in navy, ecru, wheat or salmon, depending on the neighbor-to-pixel direction.
    - Replace the pixel by its neighbors in the list.
- Stop as soon as you meet the red pixel.
The pixel colors will allow you to trace the path. Properly implemented with arrays using a compiled language, this will run in about 100 ms or less for an 800x800 image.
-
Polygons from Points
Oops, did I say C1 continuity? No, it's only C0. (C1 is achievable by using a more elaborate interpolation scheme inside the stitches, giving you smooth curves.)
-
General purpose Text editor or IDE
Visual Studio 101% of the time. For some special edits (in particular removing empty lines): MS Word!
-
Should Devs know how maths works?
Danny, I see two circumstances when programmers need to be aware of the inner workings of computer arithmetic:
- when they face the limitations of the finite representation (know about overflow and inaccuracies resulting from truncation issues),
- when they need to use optimization "tricks" related to the specifics of the representation (such as trading a division by a power of 2 for a shift).
Besides that, being cultured never harms, does it? For the brave ones: http://www.validlab.com/goldberg/paper.pdf
-
Polygons from Points
Ah, this is quite different from what I had imagined (I thought of much sparser points, irregularly arranged)! There is a much simpler way then. Consider every rectangular stitch (or are they quasi-rectangular?) in the grid in space and "cut" it with a plane at altitude 20 (say). You test every edge of the stitch for intersection with this plane, just by detecting a change of sign in the Z coordinate minus 20. Then linear interpolation gives you the X and Y coordinates of the intersection point. The Z test is such that you'll get an even number of intersections. When you have two of them, join them with a line segment; when you have four of them, join with two line segments using a nearest-neighbor rule. After doing that, you will obtain a nice polygonal decomposition of your domain. In reality, a C1 continuous approximation of the level curves. This takes two hours to implement. If you need polylines rather than isolated line segments, it is possible (and not so difficult) to follow paths in the grid: every time you find an intersection in a stitch, it is joined to a second intersection point in the same stitch; this other intersection point also belongs to the neighboring stitch, and so on, and so on. Is that clear? If you need more details, please ask me.
-
Real time peak identification
I think so too. Your signal is clean, so you should get good results. The nice thing about this approach is that you work on the signal, not its derivative. The threshold value can be set to a small multiple of the noise amplitude.
-
Real time peak identification
Wjousts, your signal indeed looks nice. If it were perfect, all you would have to do is indeed look for changes in the sign of the derivative, i.e. detect monotone sequences (increasing then decreasing, in alternation). Added noise causes local perturbations of this dream situation, resulting in small breaches of monotonicity. The closer you get to the peak, the more likely they become (as the derivative gets smaller and smaller). My way to deal with that is to consider "quasi-increasing" sequences (resp. quasi-decreasing), i.e. values that go increasing but allow a backtracking limited by a threshold value. For example, assuming a threshold of 10, the following sequence is quasi-increasing: 0, 22, 31, 27, 45, 63..., while this one is not: 0, 22, 31, 20, 45, 63... (said differently, it stops being quasi-increasing at the value 20). This approach is more robust than mere derivative computation, and you can also filter on the total increase (decrease) of the sequence. You can also limit the length of the allowed backtracking to a given number of samples. When will it detect a peak? When the signal value has decreased to the peak value minus the threshold. I guess that in any case you cannot avoid having to wait some time after a peak, to be sure it is a true one.
-
Polygons from Points
Kyudos, your problem statement is not totally clear to me. A figure would be helpful. You can form polygons around your point clouds by means of the convex hull construction (the tightest convex polygon that includes all points); if you mention unconnected islands, this means that convexity is too strong a requirement. You could resort to Alpha-shapes, a generalization of the convex hull (see http://cgm.cs.mcgill.ca/~godfried/teaching/projects97/belair/alpha.html). This suggestion only allows you to process each class independently; it will not guarantee that the polygons are nested. That requires a more general approach. Another answer is provided by the Voronoi diagram, a tiling of the plane where every data point is associated with the region of the plane closer to it than to any other data point (http://en.wikipedia.org/wiki/Voronoi_diagram). Coloring the diagram will give you the desired polygon set (in reality some discrete form of iso-lines). Maybe the specific arrangement of your data points can provide shortcuts to the solution...