Using neuralnet with caret train and adjusting the parameters


Solution 1

train sets hidden for you (based on the values given by layer1-layer3; see the sketch after this answer). You are trying to specify that argument twice, hence:

formal argument "hidden" matched by multiple actual arguments

HTH,

Max
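
To see why the argument collides, note that caret assembles layer1 through layer3 into the single hidden vector it passes on to neuralnet::neuralnet(). Below is a minimal sketch of the direct call that caret effectively makes; the toy data and formula are illustrative only, not from the question:

library(neuralnet)

# Toy regression data, purely for illustration
set.seed(1)
df <- data.frame(x1 = runif(50), x2 = runif(50))
df$y <- 2 * df$x1 - df$x2 + rnorm(50, sd = 0.1)

# caret builds hidden as a vector with one element per layer;
# here: two hidden layers with 5 and 3 units
fit <- neuralnet(y ~ x1 + x2, data = df, hidden = c(5, 3), linear.output = TRUE)

# Supplying hidden = 3 again through train()'s ... would pass the same
# formal argument a second time, which is exactly the error quoted above.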

Solution 2

I think for beginners it's not obvious at all that the layer specification cannot be passed directly into the train function.

One must read the documentation for train's ... argument very carefully to catch the relevant passage: "Errors will occur if values for tuning parameters are passed here."

So first, you must realize that the hidden parameter of neuralnet::neuralnet() is defined as a tuning parameter and therefore may not be passed directly to the train function (via ...). You can find the tuning parameter definitions with:

getModelInfo("neuralnet")$neuralnet$parameters
  parameter   class                    label
1    layer1 numeric #Hidden Units in Layer 1
2    layer2 numeric #Hidden Units in Layer 2
3    layer3 numeric #Hidden Units in Layer 3
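
The same lookup works for any other caret method string, which makes it easy to check which arguments a wrapper treats as tuning parameters before calling train. For example (the method names below are just illustrations):

library(caret)

# Tuning parameters of the plain nnet wrapper
getModelInfo("nnet")$nnet$parameters

# All registered method names, if you want to browse what is available
names(getModelInfo())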

Instead, you must pass the hidden layer definition via the tuneGrid parameter - not obvious at all, because tuneGrid is normally reserved for tuning parameters, not for fixing them.

So you can define the hidden layers as follows:

tune.grid.neuralnet <- expand.grid(
  layer1 = 10,
  layer2 = 10,
  layer3 = 10
)
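
Because expand.grid builds every combination of the values you give it, the same mechanism also lets you search over several architectures instead of fixing a single one. The candidate values below are arbitrary examples; a 0 for layer2 or layer3 corresponds to the single-hidden-layer rows in the default grid shown in the question:

# 3 x 2 x 1 = 6 candidate architectures to evaluate during tuning
tune.grid.search <- expand.grid(
  layer1 = c(5, 10, 20),
  layer2 = c(0, 5),
  layer3 = 0
)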

and then pass that to the caret::train function call as:

  model.neuralnet.caret <- caret::train(
    formula.nps,
    data = training.set,
    method = "neuralnet",
    linear.output = TRUE,
    tuneGrid = tune.grid.neuralnet, # cannot pass parameter hidden directly!!
    metric = "RMSE",
    trControl = trainControl(method = "none", seeds = seed)
  )
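
Once that call returns, the fitted object behaves like any other caret model. For example (test.set here is a hypothetical hold-out data frame, not something defined above):

model.neuralnet.caret$finalModel   # the underlying neuralnet fit
predictions <- predict(model.neuralnet.caret, newdata = test.set)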
Author: user2062207
Updated on June 09, 2022

Comments

  • user2062207
    user2062207 almost 2 years

    So I've read a paper that used neural networks to model a dataset similar to the one I'm currently using. I have 160 descriptor variables that I want to model for 160 cases (regression modelling). The paper I read used the following parameters:-

    'For each split, a model was developed for each of the 10 individual train-test folds. A three layer back-propagation net with 33 input neurons and 16 hidden neurons was used with online weight updates, 0.25 learning rate, and 0.9 momentum. For each fold, learning was conducted from a total of 50 different random initial weight starting points and the network was allowed to iterate through learning epochs until the mean absolute error (MAE) for the validation set reached a minimum. '

    Now they used a specialist piece of software called Emergent to do this, which is a very specialised neural network modelling tool. However, as I've built my previous models in R, I have to stick with it. So I'm using the caret train function to do 10-fold cross-validation, repeated 10 times, with the neuralnet package. I did the following:-

    cadets.nn <- train(RT..seconds.~., data = cadet, method = "neuralnet", algorithm = 'backprop', learningrate = 0.25, hidden = 3, trControl = ctrl, linout = TRUE)
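
    (The ctrl object isn't shown above; a repeated cross-validation setup matching that description would look roughly like this sketch:)

    library(caret)

    # 10-fold cross-validation, repeated 10 times
    ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 10)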
    

    I did this to try to tune the parameters as closely as possible to the ones used in the paper; however, I get the following error message:-

      layer1 layer2 layer3 RMSE Rsquared RMSESD RsquaredSD
    1      1      0      0  NaN      NaN     NA         NA
    2      3      0      0  NaN      NaN     NA         NA
    3      5      0      0  NaN      NaN     NA         NA
    Error in train.default(x, y, weights = w, ...) : 
      final tuning parameters could not be determined
    In addition: There were 50 or more warnings (use warnings() to see the first 50)
    

    Do you know what I'm doing wrong? It works when I use nnet, but I can't tune its parameters to make them similar to the ones used in the paper I'm trying to mimic.

    This is what I get in the warnings() fifty times:-

    1: In eval(expr, envir, enclos) :
      model fit failed for Fold01.Rep01: layer1=1, layer2=0, layer3=0 Error in neuralnet(form, data = data, hidden = nodes, ...) : 
      formal argument "hidden" matched by multiple actual arguments
    
    2: In data.frame(..., check.names = FALSE) :
      row names were found from a short variable and have been discarded
    3: In eval(expr, envir, enclos) :
      model fit failed for Fold01.Rep01: layer1=3, layer2=0, layer3=0 Error in neuralnet(form, data = data, hidden = nodes, ...) : 
      formal argument "hidden" matched by multiple actual arguments
    
    4: In data.frame(..., check.names = FALSE) :
      row names were found from a short variable and have been discarded
    5: In eval(expr, envir, enclos) :
      model fit failed for Fold01.Rep01: layer1=5, layer2=0, layer3=0 Error in neuralnet(form, data = data, hidden = nodes, ...) : 
      formal argument "hidden" matched by multiple actual arguments
    

    Thanks!

    • pangia
      pangia about 10 years
      What is in warnings()?
    • user2062207
      user2062207 about 10 years
      '1: In eval(expr, envir, enclos) : model fit failed for Fold01.Rep01: layer1=1, layer2=0, layer3=0 Error in neuralnet(form, data = data, hidden = nodes, ...) : formal argument "hidden" matched by multiple actual arguments 2: In data.frame(..., check.names = FALSE) : row names were found from a short variable and have been discarded 3: In eval(expr, envir, enclos) : model fit failed for Fold01.Rep01: layer1=3, layer2=0, layer3=0 Error in neuralnet(form, data = data, hidden = nodes, ...) : formal argument "hidden" matched by multiple actual arguments ' I get this fifty times
  • antecessor
    antecessor almost 4 years
    So if I understand correctly, here you are training the neural network with three hidden layers, each composed of 10 neurons, is that right?
  • Agile Bean
    Agile Bean almost 4 years
    Yes. The caret wrapper for neuralnet allows you to specify only three layers (layer1-layer3).
  • elmato
    elmato over 2 years
    This is super helpful, not only for OP's problem but it provides insight into how find similar info for other models. Thanks @AgileBean