How can I avoid negative values in the output of a feedforward net?

Hello,
I'm building a feedforward neural network to predict hourly values of solar radiation. However, for some hours "at night", where it should output 0, the network produces a negative number instead.
The output is a vector of 15 elements (one value for each hour of the day) whose values range from 0 to 1400.
Here is my code:
% Load inputs and targets from Excel and convert them to neural network cell-array form
inputs = tonndata(xlsread('datosJP','inirr3'),false,false);
targets = tonndata(xlsread('datosJP','targirr3'),false,false);
% Two hidden layers (12 and 8 neurons), Levenberg-Marquardt training
net = feedforwardnet([12,8],'trainlm');
% 'lr' and 'mc' apply to gradient-descent training functions (e.g. traingdx), not trainlm
% net.trainParam.lr = 0.05;
% net.trainParam.mc = 0.1;
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{3}.processFcns = {'removeconstantrows','mapminmax'};   % layer 3 is the output layer
net.divideFcn = 'dividerand';
net.divideMode = 'time';
net.divideParam.trainRatio = 90/100;
net.divideParam.valRatio = 10/100;
net.divideParam.testRatio = 0/100;
net.performFcn = 'mse';
net = configure(net,inputs,targets);
% Random initial input weights in [-10, 10]
a = 20*rand(12,net.inputs{1}.size)-10;
net.IW{1} = a;
net = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs);

Accepted Answer

Greg Heath on 14 Oct 2014
Edited: Greg Heath on 14 Oct 2014
1. If your targets are bounded for a physical or mathematical reason, the transfer functions logsig or tansig can be used. For documentation, use the help and doc commands. For example,
help logsig
doc logsig
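For a quick look at their output ranges (this snippet is just illustrative):
x = -5:0.1:5;
plot(x, logsig(x), x, tansig(x))   % logsig is bounded to (0,1), tansig to (-1,1)
legend('logsig','tansig'), grid on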
2. What sizes are your input and target matrices?
[ I N ] = size(inputs)
[ O N ] = size(targets)
3. If predictions are correlated to past inputs and predictions, the best model could be a time-series network.
help narxnet
doc narxnet
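A minimal open-loop narxnet sketch, assuming the same inputs/targets cell arrays as in your code (the delay ranges and hidden-layer size are placeholders):
inputDelays = 1:2;                                   % placeholder delays
feedbackDelays = 1:2;
hiddenSize = 10;                                     % placeholder hidden-layer size
net = narxnet(inputDelays,feedbackDelays,hiddenSize);
[Xs,Xi,Ai,Ts] = preparets(net,inputs,{},targets);    % build delayed input and feedback states
net = train(net,Xs,Ts,Xi,Ai);
Y = net(Xs,Xi,Ai);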
4. If you stay with the static model, use the regression function FITNET, which calls FEEDFORWARDNET but adds the helpful regression plot. If you want to know the default values, just type, WITHOUT a SEMICOLON,
net = fitnet
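A minimal fitnet sketch (the hidden-layer size is a placeholder; x is an I-by-N input matrix and t an O-by-N target matrix):
net = fitnet(10);          % placeholder hidden-layer size
net = train(net,x,t);
y = net(x);
plotregression(t,y)        % the regression plot mentioned above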
5. It is confusing when you explicitly assign default values to parameters. For examples, look at the code in the help and doc examples.
The only parameters I tend to change are net.divideFcn, net.trainParam.goal, and net.trainParam.min_grad. Increasing the latter two can reduce training time without introducing significant error.
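For illustration only, with placeholder values:
net = fitnet(10);
net.divideFcn = 'dividetrain';    % e.g. use all data for training
net.trainParam.goal = 1e-3;       % a larger goal stops training earlier
net.trainParam.min_grad = 1e-5;   % a larger min_grad stops training earlier
net = train(net,x,t);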
6. Any good real-world design is going to require looking at tens or hundreds of candidates. My policy is to first use defaults, then change parameters to improve performance.
Except for the three parameters mentioned above, it usually just comes down to finding the minimum number of hidden nodes that is sufficient and designing multiple candidates that differ only in their initial random weights.
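A sketch of such a candidate search (the size range and number of trials are placeholders):
bestPerf = Inf;
for h = 2:2:12                       % candidate hidden-layer sizes
    for trial = 1:10                 % candidates differing only by initial weights
        rng(trial)                   % reproducible random initialization
        net = fitnet(h);
        net = train(net,x,t);
        perf = perform(net,t,net(x));
        if perf < bestPerf
            bestPerf = perf;
            bestNet = net;
        end
    end
end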
Hope this helps
Thank you for formally accepting my answer
Greg
  2 Comments
Juan on 14 Oct 2014
Thank you Greg for answering so quickly.
1) I changed the transfer function of the two hidden layers to a sigmoid and the performance has improved. However, since the output function is still 'purelin', there are still some negative values (although fewer than before). I cannot change this last function to a sigmoid, since that would limit my outputs to the range 0 to 1. Is there another transfer function that is similar to 'purelin' but does not give negative values?
2) As input I have three elements with 365 time steps (365x3), and as output, 15 elements with 365 time steps as well (365x15).
3) The network is for predicting solar radiation. I did not use past values as inputs, since that was too much data and it wasn't giving good results. The inputs are meteorological parameters for the next day.
4) OK, I will see what happens if I use fitnet instead of feedforwardnet. I'm not clear, however, on the difference between the two. Also, do you think a radial basis function network would be helpful?
5) Do you mean the initial weights? I found that if I set those values between -10 and +10 beforehand, the performance of the network improved. That was from a heuristic test.
6) Thank you for the advice. It worked, but some output values are still negative.
Greg Heath on 17 Oct 2014
Edited: Greg Heath on 17 Oct 2014
Only 1 hidden layer (input-hidden-output) is sufficient.
fitnet only differs from feedforwardnet by a single output plot. However, I keep my problems straight by using fitnet only for regression and patternnet for classification. feedforwardnet never has to be used.
Use 'tansig' for the hidden layer and 'logsig' for the output. The default reverse normalization will yield the correct answer (a minimal sketch follows below).
The default weight initialization is supposed to be optimal.
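A minimal sketch of that suggestion (the hidden-layer size is a placeholder, and aligning the mapminmax output range with logsig's [0,1] range is an assumption beyond the answer above):
net = fitnet(10);                          % placeholder hidden-layer size
net.layers{1}.transferFcn = 'tansig';      % hidden layer
net.layers{2}.transferFcn = 'logsig';      % output layer, bounded to (0,1)
net.outputs{2}.processParams{2}.ymin = 0;  % assumption: map targets to [0,1]
net.outputs{2}.processParams{2}.ymax = 1;  %   so the reverse scaling matches logsig
net = train(net,x,t);
y = net(x);                                % outputs stay within the original target range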

