Feed forward Neural network (back propagation) code classification problem [modified]

C# · Tags: csharp, sysadmin, help, tutorial · 12 Posts, 3 Posters
  • K karayel_kara

Hi everybody. I have written an artificial neural network in C#. I have a dataset of 500 samples, normalized between -1 and 1, and I use a learning rate and a momentum coefficient. I train the network with 400 samples. The network has 5 inputs and 5 outputs; the outputs are encoded as 00001 / 00010 / 00100 / 01000 / 10000. When the network is trained with 10000 iterations over the 400 samples, the output has a problem: the network only seems to learn the last class it is trained on, so when I test my datasets only that class is predicted as true. As a result my code learns only the last class; the other classes are not predicted, or are predicted poorly (a per-class accuracy check is sketched after the training code below).

Example test and training data, 5-neuron input:

{{0.04418, 0.18135, 0.28143, 0.34367, 0.14938},
 {0.37418, 0.35050, 0.18184, 0.07493, 0.01855},
 {0.04540, 0.17061, 0.24506, 0.32560, 0.21333},
 {0.08494, 0.25653, 0.26922, 0.26141, 0.12790},
 {0.18135, 0.36832, 0.24799, 0.15743, 0.04491},
 {0.11887, 0.29143, 0.27581, 0.22870, 0.08518},
 {0.02758, 0.26239, 0.37247, 0.29998, 0.03759},
 {0.00781, 0.15792, 0.32145, 0.41811, 0.09470},
 {0.07811, 0.31608, 0.31901, 0.24506, 0.04174},
 {0.07664, 0.32390, 0.32829, 0.21674, 0.05443},
 ....

5-neuron output:

{ {1, 0, 0, 0, 0},
  {1, 0, 0, 0, 0},
  {0, 1, 0, 0, 0},
  {0, 1, 0, 0, 0},
  {0, 1, 0, 0, 0},
  {0, 0, 1, 0, 0},
  {0, 0, 1, 0, 0},
  {0, 0, 0, 1, 0},
  {0, 0, 0, 1, 0},
  {0, 0, 0, 0, 1},
  {0, 0, 0, 0, 1},
  .....

My training code block is as follows:

int iterasyon = 100000, it = 0;
while (it < iterasyon && mmse > 0.00001)    // iteration loop: stop after 'iterasyon' epochs or once the error is small
{
    mmse = 0;
    mae = 0;
    for (int nf = 0; nf < 400; nf++)        // dataset count
    {
        mmse += hesaplas(ndata, nf);        // forward pass, accumulate squared error for sample nf
        //mae += smean_abs_err;
        trains(nf);                         // back-propagate and update the weights for sample nf
    }

    mmse = 0.5 * (mmse / (400 * 5));        // mean squared error over 400 samples x 5 outputs

    if (it % 100 == 0)
    {
        yazs.WriteLine(mmse.ToString());
        yazs.Flush();
    }
    it++;
}
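To quantify "only the last class is predicted", it helps to count hits per class on the held-out test samples. A minimal sketch, assuming the network's forward pass is available as a delegate; ReportPerClassAccuracy, inputs and targets are placeholder names:

using System;

// Per-class accuracy for a 5-output, one-hot network: the predicted class is
// the index of the largest output, the expected class the index of the 1 in
// the target row. (All names here are placeholders for illustration.)
static void ReportPerClassAccuracy(double[][] inputs, double[][] targets,
                                   Func<double[], double[]> compute)
{
    int[] correct = new int[5];
    int[] total = new int[5];

    for (int n = 0; n < inputs.Length; n++)
    {
        double[] output = compute(inputs[n]);

        int predicted = 0, expected = 0;
        for (int k = 1; k < 5; k++)
        {
            if (output[k] > output[predicted]) predicted = k;
            if (targets[n][k] > targets[n][expected]) expected = k;
        }

        total[expected]++;
        if (predicted == expected) correct[expected]++;
    }

    for (int k = 0; k < 5; k++)
        Console.WriteLine("class {0}: {1}/{2} correct", k + 1, correct[k], total[k]);
}

If the counts come out as 100% for one class and near 0% for the others, the network really has collapsed onto a single class rather than just having a high average error.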
    

    modified on Tuesday, August 3, 2010 9:30 PM

    Keith Barrow
    wrote on last edited by
    #3

Three things (on top of OriginalGriff's reply):

1. Is there any correlation between the input and the output (I cannot see any)? If not, the NN is not likely to find it.
2. How are you training the network? Your code is hard to read. If you are repeating the outputs 1,0,0,0,0 (400/10000?) times at the end of training, you are probably annealing the network to give this result no matter what. You should cycle through many outputs.
3. It is possible to overtrain a NN! I'd start with something simpler to test your NN code:
   • 1 input, 0 --> 360, and train the output to sin
   • 1 input, 0 --> 360, and train the output to x^2
   • 2 inputs, train to the logical operators (remember the XOR problem; a sample XOR set is sketched below)
   • 3 inputs, result = i1 * i2 * i3
   Then test on a real dataset.
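For the XOR check in point 3, a minimal sketch of the training set (2 inputs, 1 output; the array names are just placeholders):

// XOR sanity-check data: 2 inputs, 1 output, 4 patterns.
// If the back-propagation code is correct, a 2-2-1 (or 2-3-1) network with a
// sigmoid activation should learn this to near-zero error within a few
// thousand epochs; a single-layer network cannot (linear separability).
double[][] xorInputs =
{
    new double[] { 0, 0 },
    new double[] { 0, 1 },
    new double[] { 1, 0 },
    new double[] { 1, 1 }
};

double[][] xorOutputs =
{
    new double[] { 0 },
    new double[] { 1 },
    new double[] { 1 },
    new double[] { 0 }
};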

    ragnaroknrol The Internet is For Porn[^]
    Pete o'Hanlon: If it wasn't insulting tools, I'd say you were dumber than a bag of spanners.

      karayel_kara
      wrote on last edited by
      #4

First of all, thanks for your interest. My dataset is the following:

      data is 5x400 matrix
      b_cikis is 5x400 matrix

      [...
      long data removed
      ...]

int iterasyon = 100000, it = 0;
while (it < iterasyon && mmse > 0.00001)    // iteration loop: stop after 'iterasyon' epochs or once the error is small
{
    mmse = 0;
    mae = 0;
    for (int nf = 0; nf < 400; nf++)        // dataset count
    {
        mmse += hesaplas(data, nf);         // forward pass, accumulate squared error for sample nf
        //mae += smean_abs_err;
        trains(nf);                         // back-propagate and update the weights for sample nf
    }

    mmse = 0.5 * (mmse / (400 * 5));        // mean squared error over 400 samples x 5 outputs

    if (it % 100 == 0)
    {
        yazs.WriteLine(mmse.ToString());
        yazs.Flush();
    }
    it++;
}
      

      modified on Wednesday, August 4, 2010 10:25 AM

        Keith Barrow
        wrote on last edited by
        #5

One of the admins has trimmed your message; the actual data is irrelevant and it was taking a long time to open your message. I only want clarification on two things, not the actual test data:

1. What does the input vector represent? What is its relationship to the output vector? The input looks like a bunch of random numbers to me.
2. Are you iterating 100000 times over each input/output pair, OR are you iterating over each pair of items in the input/output arrays and then doing this 100000 times?

        ragnaroknrol The Internet is For Porn[^]
        Pete o'Hanlon: If it wasn't insulting tools, I'd say you were dumber than a bag of spanners.

        modified on Thursday, August 5, 2010 3:10 AM

          karayel_kara
          wrote on last edited by
          #6

Hi, that is my actual data set; I got it from a friend. The data was measured from patients at a hospital, so it is biological data. My friend classified it with nntool in MATLAB; he used a different learning method, Levenberg-Marquardt, while I used the gradient descent with momentum algorithm (GDM). So I hope to classify it with my software, but the test results are not good.
2. Yes, I train on each pair in the input/output arrays in each iteration (a sketch of the momentum-based weight update follows the pseudocode below):

for (i = 0; i < 100000; i++)        // iteration
    for (j = 0; j < 400; j++)       // input/output array
    {
        calculate error;

        if (error > 0.000001)
        {
            train network;
            update weights;
        }
        else
            break;
    }
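For reference, a minimal sketch of the gradient-descent-with-momentum weight update mentioned above (all names are placeholders: eta is the learning rate, alpha the momentum coefficient, and gradient[i][j] the error derivative from back-propagation):

// Gradient descent with momentum: each weight change is the usual
// learning-rate * gradient step plus a fraction of the previous change.
for (int i = 0; i < weights.Length; i++)
{
    for (int j = 0; j < weights[i].Length; j++)
    {
        double delta = -eta * gradient[i][j] + alpha * previousDelta[i][j];
        weights[i][j] += delta;
        previousDelta[i][j] = delta;   // remembered for the next sample/epoch
    }
}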
          
            Keith Barrow
            wrote on last edited by
            #7

OK, the first thing we can rule out is training to one input repeatedly at the end, which is a pity as it's the easiest fix :-). I think I understand the problem domain now: you are trying to categorise a function given 5 input points. The first thing is, you do have more than one layer between the input layer and the output layer, don't you? If not you might hit the linear separability problem: some of your functions look paraboloid, and I *think* these aren't linearly separable, so they would require at least two tiers of processing neurons. In your OP you defined the training outputs to be: 2 x Category I input, 3 x Category II input (is this correct? the others all take two), 2 x Category III input. I stopped there. I then entered your test data into Excel (red = Cat I, green = Cat II, blue = Cat III) and plotted their "function" graphs:

            {0.04418 ,0.18135 ,0.28143 ,0.34367 ,0.14938}

            {0.37418 ,0.35050 ,0.18184 ,0.07493 ,0.01855}

            {0.04540 ,0.17061 ,0.24506 ,0.32560 ,0.21333}

            {0.08494 ,0.25653 ,0.26922 ,0.26141 ,0.12790}

            {0.18135 ,0.36832 ,0.24799 ,0.15743 ,0.04491}

            {0.11887 ,0.29143 ,0.27581 ,0.22870 ,0.08518}

            {0.02758 ,0.26239 ,0.37247 ,0.29998 ,0.03759}

I suggest you do the same. AFAICT the plots for 1 and 3 are near identical (despite being in different categories, as are plots 4 & 6), while plots 1 & 2 are different (despite being in the same category; plot 1 looks like the maximum of a quadratic, the other is sigmoid), and this is also the case for graphs 6 & 7. If your input data is correct, it is doubtful that an NN can train to these sets. The situation looks a little better if the expected outputs are changed to 2 x Category I input, 2 x Category II input (corrected?), 2 x Category III input. In theory what you are doing is similar to OCR, but the graphs do not look consistent enough within a category, or different enough between categories, for your approach to work. Hopefully I'm wrong though!
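A quick way to check this numerically rather than just by eye is to compare the rows directly, e.g. with Euclidean distance: if within-category rows are not clearly closer to each other than to rows from other categories, the NN has little to work with. A minimal sketch (PrintPairwiseDistances and samples are placeholder names; each row is one 5-value sample):

using System;

// Prints the pairwise Euclidean distance between every pair of sample rows.
// Within-category pairs should come out noticeably closer than
// between-category pairs for the classification to be learnable.
static void PrintPairwiseDistances(double[][] samples)
{
    for (int a = 0; a < samples.Length; a++)
    {
        for (int b = a + 1; b < samples.Length; b++)
        {
            double sum = 0;
            for (int k = 0; k < samples[a].Length; k++)
            {
                double d = samples[a][k] - samples[b][k];
                sum += d * d;
            }
            Console.WriteLine("distance({0},{1}) = {2:F4}", a + 1, b + 1, Math.Sqrt(sum));
        }
    }
}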

            ragnaroknrol The Internet is For Porn[^]
Pete o'Hanlon: If it wasn't insulting tools, I'd say you were dumber than a bag of spanners.

              Keith Barrow
              wrote on last edited by
              #8

OP (Karayel) e-mailed me directly with the following:

> Hi,
> I have 5 layers and 3 hidden layers in the network.
> The data is not mine; as I said, I took it from my friend's lab. The data is measured from patients.
> It can be wrong.
> I have 5 categories in my data set.
> I am sending you my data. [edit: removed the data set!]

The fact that you have 3 hidden layers will solve the linear separability problem. Actually, this is the crux of your problem: "data is measured from patients. It can be wrong." Firstly, I replotted the first three graphs of category one. Graphs one and three looked very similar (a line going up, hitting a maximum, then going down again), which is good, but graph 2 looked sigmoid (it could have been the same graph but shifted earlier). The problem is that this breaks the pattern, and I suspect it happens in all the categories; if so, the NN is unlikely to be any help IMO. People have tried to predict patterns in stock-market prices with limited success, and I think the situation you have is similar. If the graphs are consistent (and you should plot samples from each category yourself to check) you may have a hope. If that is the case, there are three things I'd add:

1) If this is a research project, is a NN really suitable in vivo? The system you are writing is inherently unprovable, and given that this is a medical diagnostic tool (with potentially serious consequences), should it be trusted?

2) You are training lots of category 1 inputs, then lots of category 2, etc. If your NN anneals, it could well anneal to category 1 by doing it this way. You should cycle through one example of Cat I, Cat II, Cat III, etc. while training, and repeat until the NN is as accurate as you need it (a shuffling sketch is below). If you can develop a function to simulate expected data sets, that would be even better, as real data is unreliable and trains unreliably. Ironically, it is the unreliable input that the NN sorts out, but I'd use generated training inputs.

3) Can you express what constitutes category 1, 2, 3, etc. formally in code? If so, just coding it will be better.
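A minimal sketch of point 2, shuffling the presentation order of the 400 training samples every epoch so that no single category dominates the end of training (it reuses the hesaplas/trains calls from the posted loop; the shuffle itself is a standard Fisher-Yates):

using System;

// Shuffle the training indices every epoch so the network never sees one
// category block-by-block at the end of training.
Random rng = new Random();
int[] order = new int[400];
for (int n = 0; n < order.Length; n++) order[n] = n;

for (int epoch = 0; epoch < iterasyon; epoch++)
{
    // Fisher-Yates shuffle of the sample order.
    for (int n = order.Length - 1; n > 0; n--)
    {
        int m = rng.Next(n + 1);
        int tmp = order[n]; order[n] = order[m]; order[m] = tmp;
    }

    mmse = 0;
    for (int n = 0; n < order.Length; n++)
    {
        mmse += hesaplas(ndata, order[n]);   // forward pass on a shuffled sample
        trains(order[n]);                    // back-propagate for that sample
    }
    mmse = 0.5 * (mmse / (400 * 5));
}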

              ragnaroknrol The Internet is For Porn[^]
              Pete o'Hanlon: If it wasn't insulting tools, I'd say you were dumber than a bag of spanners.

                karayel_kara
                wrote on last edited by
                #9

Hi, I am writing again. I want to test my code: is there any function which has 5 inputs and 5 outputs? Or a square function, for example 2 inputs x 2 outputs, or 4 inputs x 4 outputs? Maybe you can recommend a function for my testing. In addition, thank you so much for sharing your information with me. Best regards.

                  Keith Barrow
                  wrote on last edited by
                  #10

If you are just testing that the NN works, I'd lower the inputs and outputs to 1 neuron, then train to some functions (sin, x^2, ln, etc.); this gives you a baseline to ensure all is correct (a sketch of such a set is below). As for 5 inputs and 5 outputs, I'm not sure; the problem I applied my NN to was quite different. You could try a matrix transform of some sort, or categorise 5 inputs. You might have better luck contacting a dedicated NN board (I'm sure I've seen them somewhere). I have only used a NN once (for my MSc dissertation) and that was ~10 years ago, so I'm pretty rusty. Every couple of years I get the urge to go back to it, but other things are shinier (and/or more remunerative :-). I had to hand-check my NN: there was a term in the momentum calculation (actually a "-" instead of a "+"!) that brought the NN close to the desired result, then it would diverge again. That took two weeks (C++ on Linux, nedit, but no real IDE). Anyway, best of luck.
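A minimal sketch of such a 1-input/1-output baseline set, with the input scaled to [0, 1] and sin rescaled to [0, 1] so a sigmoid output neuron can represent it (the array names are placeholders):

using System;

// Baseline training set: x in [0, 360] degrees -> sin(x),
// both rescaled into [0, 1] for a sigmoid output neuron.
const int count = 361;
double[][] inputs = new double[count][];
double[][] outputs = new double[count][];

for (int deg = 0; deg < count; deg++)
{
    double x = deg / 360.0;                               // input scaled to [0, 1]
    double y = (Math.Sin(deg * Math.PI / 180.0) + 1) / 2; // sin rescaled to [0, 1]
    inputs[deg] = new double[] { x };
    outputs[deg] = new double[] { y };
}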

                  ragnaroknrol The Internet is For Porn[^]
                  Pete o'Hanlon: If it wasn't insulting tools, I'd say you were dumber than a bag of spanners.

                    karayel_kara
                    wrote on last edited by
                    #11

:) Thank you too; I used a momentum coefficient. Maybe you will start studying NNs again? Can you still see your code? And if so, can you post the main block of your code? Best.

                      Keith Barrow
                      wrote on last edited by
                      #12

                      karayel_kara wrote:

I used a momentum coefficient.

It's a good thing to use: I tested my NN without the coefficient and learning was much slower. I just had a bug in mine!

                      karayel_kara wrote:

Maybe you will start studying NNs again?

Not likely, sadly: there is not much call for it in the business world, and most of what I do is related to business.

                      karayel_kara wrote:

Can you still see your code? And if so, can you post the main block of your code?

If memory serves, ~2000+ lines of [badly written, I had only just started developing!] C++, and I only have a hard copy here; the CD is back in the UK. There are neural network frameworks available on CP. This one is really good: http://www.codeproject.com/KB/recipes/aforge_neuro.aspx[^]
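If you do try AForge.NET, the training code is roughly along these lines (sketched from memory, so double-check against the linked article; trainInputs/trainOutputs are placeholders for your 400-row input and one-hot output arrays):

using AForge.Neuro;
using AForge.Neuro.Learning;

// Roughly: a 5-input network with one hidden layer of 10 neurons and 5 outputs,
// trained by back-propagation with a learning rate and momentum.
ActivationNetwork network = new ActivationNetwork(new SigmoidFunction(2), 5, 10, 5);
BackPropagationLearning teacher = new BackPropagationLearning(network)
{
    LearningRate = 0.1,
    Momentum = 0.5
};

for (int epoch = 0; epoch < 10000; epoch++)
{
    // RunEpoch presents every input/output pair once and returns the summed error.
    double error = teacher.RunEpoch(trainInputs, trainOutputs);
    if (error < 0.00001) break;
}

double[] prediction = network.Compute(trainInputs[0]);   // 5 output activations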

                      ragnaroknrol The Internet is For Porn[^]
                      Pete o'Hanlon: If it wasn't insulting tools, I'd say you were dumber than a bag of spanners.
