"UserWarning: An input could not be retrieved. It could be because a worker has died. We do not have any information on the lost sample."
Solution 1
This is just a user warning that is usually thrown when the inputs and targets are fetched during training. It appears because a timeout is set for the queuing mechanism, which is specified inside data_utils.py.
For more details you can refer to the data_utils.py file, which is inside the keras/utils folder:
https://github.com/keras-team/keras/blob/master/keras/utils/data_utils.py
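As a toy illustration of that queuing mechanism (pure Python, not the actual Keras internals; the queue size, timings, and function names below are all made up for demonstration), a producer thread stands in for the CPU-side data workers and a consumer loop stands in for the training step. The warning corresponds to the consumer's fetch timing out before the producer has a batch ready:

```python
import queue
import threading
import time

def slow_producer(q, n_batches, prep_time):
    """Stand-in for the data workers: preparing each batch takes prep_time."""
    for i in range(n_batches):
        time.sleep(prep_time)           # pretend batch preparation takes this long
        q.put(f"batch-{i}")

def train(n_batches, prep_time, timeout):
    """Stand-in for the training loop; returns how many fetches timed out."""
    q = queue.Queue(maxsize=10)         # analogous to max_queue_size
    worker = threading.Thread(target=slow_producer, args=(q, n_batches, prep_time))
    worker.start()
    lost = 0
    for _ in range(n_batches):
        try:
            q.get(timeout=timeout)      # analogous to the timeout in data_utils.py
        except queue.Empty:
            lost += 1                   # this is the point where Keras warns
    worker.join()
    return lost

# A producer faster than the timeout never loses a sample:
print(train(n_batches=3, prep_time=0.0, timeout=1.0))   # 0
# A producer much slower than the timeout loses samples:
print(train(n_batches=3, prep_time=0.5, timeout=0.05))  # 3
```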
Solution 2
If you are running the training on a GPU, the warning can occur. You have to know that there are two processes running in parallel while fit_generator is running:
- the GPU trains on the image batches, one step at a time in each epoch;
- the CPU prepares the image batches, one batch at a time.
These are parallel tasks, so if the CPU's throughput is lower than the GPU's, the warning occurs.
Solution:
Just set your batch_size smaller or upgrade your CPU.
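The effect of a smaller batch_size can be sketched as a back-of-the-envelope check (a toy illustration, not Keras code: the timeout value and the per-image preparation time are made-up assumptions). The fetch timeout is fixed per batch, while CPU preparation time grows with batch size, so a smaller batch can fit under the timeout:

```python
def triggers_warning(cpu_prep_per_image, batch_size, timeout=30.0):
    """Return True if preparing one batch takes longer than the fetch timeout.

    cpu_prep_per_image and timeout (seconds) are illustrative numbers only.
    """
    cpu_batch_time = cpu_prep_per_image * batch_size
    return cpu_batch_time > timeout

print(triggers_warning(0.5, 128))  # True: 64 s of prep exceeds the 30 s timeout
print(triggers_warning(0.5, 32))   # False: 16 s of prep fits under it
```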
Solution 3
I got the same warning when training a model in Google Colab. The problem was that I tried to fetch the data from my Google Drive that I had mounted to the Colab session. The solution was to move the data into Colab's working directory and use it from there. This can be done simply via !cp -r path/to/google_drive_data_dir/ path/to/colab_data_dir
in the notebook. Note that you will have to do this each time a new Colab session is created.
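The same one-time copy can also be done from Python with the standard library, which is convenient if you want to skip the copy when it has already been made in the current session (a sketch; the paths shown in the comment are placeholders, not paths mandated by Colab):

```python
import shutil
from pathlib import Path

def copy_once(src, dst):
    """Copy a dataset directory to fast local storage once per session.

    src would be the mounted Drive folder and dst a folder on the Colab
    VM's local disk; both names are up to you.
    """
    src, dst = Path(src), Path(dst)
    if not dst.exists():                # skip if already copied this session
        shutil.copytree(src, dst)
    return dst

# In a Colab notebook this might look like (paths are assumptions):
# data_dir = copy_once("/content/drive/MyDrive/dataset", "/content/dataset")
```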
This may or may not be the problem that Rahul was asking, but I think this might be helpful to others who face the issue.
Rahul Anand
I have been a Deep Learning Engineer for the last 3 years, working on an ADAS (Advanced Driver Assistance Systems) project for one of the most reputed US clients at Tata Consultancy Services. I like to learn more about Artificial Intelligence, especially deep learning algorithms and their implementations.
Updated on June 14, 2022
Comments
-
Rahul Anand almost 2 years
While training the model I got this warning: "UserWarning: An input could not be retrieved. It could be because a worker has died. We do not have any information on the lost sample." After showing this warning, the model starts training. What does this warning mean? Is it something that will affect my training and that I need to worry about?
-
Fernand about 4 years
May we know why reducing the number of workers and max_queue_size will solve the problem?
-
Anshuman Kumar about 4 years
I am using my Google Drive as storage. Where else would I put this? Colab uses Google Drive as a hard disk, right?
-
Benchur Wong almost 4 years
Could you explain more about 'path/to/colab_data_dir'?
-
Benchur Wong almost 4 years
So, could you explain more about it? I ran into the same error.
-
mjkvaak almost 4 years
Sorry, I thought that I had answered the first question already. AFAIK, opening a Google Colab session spins up a virtual machine to which you can mount your Google Drive. However, the mount is not a physical one (fast); the files need to be transferred over the internet (slow). It's this file transfer that causes the bottleneck. To avoid it, it's best to copy the files from Drive physically to the Colab session's drive (any folder you prefer), after which you can access them faster.