neuralized lessons (Fri, 24 Sep 2004 23:30:14 GMT)

aka lessons learned. mucked around with the neural networks again today by adding some error feedback mechanisms. basically, this lets me judge the total error in the system as it runs, but it is most helpful during training.
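a minimal sketch of that error feedback idea, assuming numpy and a made-up one-hidden-layer net rather than the actual network from this post: run every sample through and sum the squared error at the outputs.

```python
import numpy as np

def total_error(weights_ih, weights_ho, inputs, targets):
    """run every sample through the net and sum the squared output error."""
    error = 0.0
    for x, t in zip(inputs, targets):
        hidden = np.tanh(weights_ih @ x)       # hidden layer activations
        output = np.tanh(weights_ho @ hidden)  # output layer activations
        error += np.sum((t - output) ** 2)     # accumulate squared error per sample
    return error
```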

there seem to be multiple levels of training that you can attain: first is to just train it until it properly matches all the inputs to the correct outputs. second is to achieve the same goal, but to feed the inputs in a random order; this makes sure that it trains evenly across the inputs. third is to achieve the same goal, but to keep running it to drive the error down; this seemed to help it handle noisy input better. granted, there is a point at which you can overtrain and make it too specialized, which makes it worse at handling noise.
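a rough sketch of those training regimes, using an assumed single-layer sigmoid net with delta-rule updates (the net, learning rate, and thresholds here are guesses, not the ones from this post): shuffle the presentation order each pass, keep iterating until the total error drops below a threshold, and cap the epochs so it can't grind on forever.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(inputs, targets, lr=0.5, weight_range=0.5, min_error=0.01, max_epochs=10_000):
    """train a single-layer net; stops once the total squared error < min_error."""
    n_in, n_out = inputs.shape[1], targets.shape[1]
    w = rng.uniform(-weight_range, weight_range, size=(n_out, n_in))  # small random weights
    total = float("inf")
    for _ in range(max_epochs):
        order = rng.permutation(len(inputs))   # random presentation order each epoch
        total = 0.0
        for i in order:
            x, t = inputs[i], targets[i]
            y = sigmoid(w @ x)
            err = t - y
            total += np.sum(err ** 2)
            # delta rule: nudge the weights along the output error gradient
            w += lr * np.outer(err * y * (1.0 - y), x)
        if total < min_error:                  # good enough; stop before it over-specializes
            break
    return w, total
```

the early stop on min_error plus the epoch cap is the crude guard against the overtraining problem mentioned above.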

what else ... training sucks. you waste a bunch of time waiting for it to train. i highly recommend using an NN that can de/serialize itself once it has been trained; although i've been tweaking it a bunch of different ways and retraining to see if a certain setting works better. that part also sucks ... tweaking the NN. some examples are: how you format the input. the range of the random weights you initialize it with. what transfer function you use for activation. if you use bias. what learning rate and momentum you use. if you use a different learning rate for the hidden nodes than for the output nodes. how long you train it. what minimum error is acceptable. that is a lot of different things you can muck up. i have not done it yet, but i am considering making a 'training harness' that i could use to try out all these different permutations and keep score of which setup did best. then i could just let it run overnight until it came up with the best fit. sort of procedural genetic programming, if you will.
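a sketch of what that training harness could look like: sweep a few of the knobs above (weight range, learning rate, minimum error), score each run by its final error, and pickle the winner so it never has to be retrained. it leans on the hypothetical train() sketch above, and the grids and the best_net.pkl filename are made up.

```python
import itertools
import pickle

# made-up grids over a few of the settings mentioned above
weight_ranges = [0.1, 0.5, 1.0]
learning_rates = [0.1, 0.3, 0.5]
min_errors = [0.05, 0.01]

def run_harness(inputs, targets):
    best = None
    for wr, lr, me in itertools.product(weight_ranges, learning_rates, min_errors):
        w, err = train(inputs, targets, lr=lr, weight_range=wr, min_error=me)
        if best is None or err < best[0]:      # keep score of which setup did best
            best = (err, {"weight_range": wr, "learning_rate": lr, "min_error": me}, w)
    err, settings, w = best
    with open("best_net.pkl", "wb") as f:
        pickle.dump({"weights": w, "settings": settings}, f)  # de/serialize the winner
    return settings, err
```

the same loop could grow extra axes for momentum, transfer function, bias, and so on; let it run overnight and check the scoreboard in the morning.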