Pair wise testing with QICT Knuth shuffle algorithm

Tags: algorithms, csharp, perl, com, data-structures
  • Lost User
    #1

    I am engaged in research and development of a new article in a series of articles on algorithms. I plan to use the Big ‘O’ Algorithm Analyzer for .NET featured in my article here at The Code Project: Big O Algorithm Analyzer for .NET. I recently read the MSDN article by Dr. James McCaffrey on the QICT pairwise testing utility he wrote for .NET. In his article he encourages the reader to try different algorithms for weighting the pairs found by the algorithm used in the tool. I thought this would be a great application to put under test with the Big ‘O’ tool, both to demonstrate how to instrument code in an application and to show some methods of improving the algorithms for given data sets. Dr. James’ article can be found here: Pairwise Testing with QICT.

    I would like to ask anyone who wants to write a different algorithm as described in the article to post it here, or to discuss different approaches that seem like they would be an asset to the existing code base. I will also demonstrate the difference between a good Knuth shuffle and a bad one; consider the Perl code below.

    Bad Knuth:

    sub shuffle {
        my @array = @_;
        my $length = scalar(@array);

        # Biased: every element can be swapped with any of the $length
        # positions, giving $length ** $length equally likely outcomes.
        # That count is not divisible by $length! for $length > 2, so
        # some permutations must come up more often than others.
        for (my $i = 0; $i < $length; $i++) {
            my $r = int(rand() * $length);
            @array[$i, $r] = @array[$r, $i];
        }

        return @array;
    }

    Good Knuth:

    sub shuffle {
        my @array = @_;
        my $length = scalar(@array);

        # Unbiased Fisher-Yates (Knuth) shuffle: element $i is only
        # swapped with a position in [0, $i], giving exactly $length!
        # equally likely outcomes, one per permutation.
        for (my $i = $length - 1; $i > 0; $i--) {
            my $r = int(rand() * ($i + 1));
            @array[$i, $r] = @array[$r, $i];
        }

        return @array;
    }
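To make the bias concrete, here is a small Python sketch (a translation of the two Perl routines above, not part of the original post) that enumerates every possible sequence of random draws for a 3-element array and counts how often each permutation is produced. The naive loop yields 3^3 = 27 equally likely outcomes, which cannot divide evenly among 3! = 6 permutations; the Fisher-Yates loop yields exactly 6.

```python
import itertools
from collections import Counter

def naive_shuffle_outcomes(n):
    # Naive shuffle: swap element i with a uniformly random index in [0, n).
    # Enumerate all n**n possible draw sequences and tally the results.
    counts = Counter()
    for draws in itertools.product(range(n), repeat=n):
        a = list(range(n))
        for i, r in enumerate(draws):
            a[i], a[r] = a[r], a[i]
        counts[tuple(a)] += 1
    return counts

def fisher_yates_outcomes(n):
    # Fisher-Yates: for i from n-1 down to 1, swap element i with a
    # random index in [0, i]. Enumerate all n! possible draw sequences.
    counts = Counter()
    for draws in itertools.product(*(range(i + 1) for i in range(n - 1, 0, -1))):
        a = list(range(n))
        for i, r in zip(range(n - 1, 0, -1), draws):
            a[i], a[r] = a[r], a[i]
        counts[tuple(a)] += 1
    return counts

naive = naive_shuffle_outcomes(3)
good = fisher_yates_outcomes(3)
print(sorted(naive.values()))  # [4, 4, 4, 5, 5, 5] -> three permutations are favored
print(sorted(good.values()))   # [1, 1, 1, 1, 1, 1] -> perfectly uniform
```

The skew (4 vs. 5 out of 27) looks small for n = 3, but it grows with n, which is exactly why the downward loop with `rand() * ($i + 1)` matters.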

      • Lost User
        #2

      Okay, I'm stumped. Would anyone like to tell me why this is a bad question? I'm not trolling to promote my article. I am trying to get ideas to better solve the sorting/complexity algorithm posed in MSDN. I got downvoted twice on this post. I'd like to know why, so that in the future I don't make the same mistake. :sigh:

        • Lost User
          #3

        As an independent observer, i.e. I know nothing about the QICT Knuth shuffle algorithm, I am as stumped as you are. Your previous post seems quite reasonable to me, so I can only assume the downvoting was by one or more of the many people who seem to stalk the forums downvoting for no reason. I have noticed certain people automatically downvote articles without giving a reason; it's just something we have to live with. I think a question was raised as to tracking these types but Chris commented that it would be too much effort for little gain. I suggest you go ahead with your article and see what happens. [edit]I gave you a 5 to try and even the score, let's hope someone else follows suit.[/edit]

          • Lost User
            #4

          Thanks... Maybe downvoting should require a comment, with an option to make it anonymous to protect the innocent... I will write it up as a suggestion. I will also download the code and post the exact algorithm in the thread to better illustrate the problem. :cool:

            • Lost User
              #5

            I don't see why it should be anonymous. If you think something is not up to standard then you should have the courage to put your name and comments behind it.
