caffe - Prepare Data for Training: Prepare arbitrary data in HDF5 format


Example

In addition to image classification datasets, Caffe also has an "HDF5Data" layer for arbitrary inputs. This layer requires all training/validation data to be stored in HDF5-format files.
This example shows how to use the Python h5py module to construct such an HDF5 file and how to set up the Caffe "HDF5Data" layer to read it.

Build the HDF5 binary file

Assume you have a text file 'train.txt' in which each line contains an image file name and a single floating-point number to be used as a regression target.
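
For illustration, a hypothetical 'train.txt' (file names and target values are made up) might look like:

/path/to/img_000.jpg 0.72
/path/to/img_001.jpg 1.35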

import h5py, os
import caffe
import numpy as np

SIZE = 224 # fixed size for all images
with open( 'train.txt', 'r' ) as T :
    lines = T.readlines()
# If you do not have enough memory split data into
# multiple batches and generate multiple separate h5 files
X = np.zeros( (len(lines), 3, SIZE, SIZE), dtype='f4' )
y = np.zeros( (len(lines), 1), dtype='f4' ) # one regression target per image
for i,l in enumerate(lines):
    sp = l.split()
    img = caffe.io.load_image( sp[0] ) # HxWx3, RGB, float in [0,1]
    img = caffe.io.resize_image( img, (SIZE, SIZE) ) # resize to fixed size
    # you may apply other input transformations here...
    # Caffe expects channels-first data, so transpose HxWx3 to 3xHxW
    img = img.transpose( (2, 0, 1) )
    X[i] = img
    y[i] = float(sp[1])
with h5py.File('train.h5','w') as H:
    H.create_dataset( 'X', data=X ) # note the name X given to the dataset!
    H.create_dataset( 'y', data=y ) # note the name y given to the dataset!
with open('train_h5_list.txt','w') as L:
    L.write( 'train.h5' ) # list all h5 files you are going to use
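
To sanity-check the resulting file, you can read it back with h5py; a quick optional check (not part of the original recipe):

import h5py
with h5py.File('train.h5','r') as H:
    print( H['X'].shape, H['X'].dtype ) # (num_images, 3, 224, 224) float32
    print( H['y'].shape, H['y'].dtype ) # (num_images, 1) float32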

Configuring the "HDF5Data" layer

Once you have all the h5 files, and the corresponding text files listing them, you can add an HDF5 input layer to your train_val.prototxt:

 layer {
   name: "data"
   type: "HDF5Data"
   top: "X" # same name as given in create_dataset!
   top: "y"
   hdf5_data_param {
     source: "train_h5_list.txt" # do not give the h5 files directly, but the list.
     batch_size: 32
   }
   include { phase:TRAIN }
 }
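
Assuming the prototxt above is saved as 'train_val.prototxt', you can verify from Python that the layer reads the data correctly; a minimal sketch (the file name and the expected shapes follow the settings used above):

import caffe
caffe.set_mode_cpu()
net = caffe.Net('train_val.prototxt', caffe.TRAIN)
net.forward() # fetches one batch through the "HDF5Data" layer
print( net.blobs['X'].data.shape ) # (32, 3, 224, 224)
print( net.blobs['y'].data.shape ) # (32, 1)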



As shown above, we pass Caffe a list of HDF5 files rather than the files themselves. That is because the current version imposes a size limit of 2GB on a single HDF5 data file, so if the training data exceeds 2GB we need to split it across separate files.
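
A minimal sketch of such splitting, continuing the script above (the per-file chunk size is an arbitrary illustrative choice, and X and y are assumed to still fit in memory):

CHUNK = 10000 # images per h5 file; pick so each file stays under 2GB
with open('train_h5_list.txt','w') as L:
    for start in range(0, len(lines), CHUNK):
        fname = 'train_%d.h5' % (start // CHUNK)
        with h5py.File(fname,'w') as H:
            H.create_dataset( 'X', data=X[start:start+CHUNK] )
            H.create_dataset( 'y', data=y[start:start+CHUNK] )
        L.write(fname + '\n') # one h5 file per line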

If a single HDF5 data file exceeds 2GB, we'll get an error message like:

Check failed: shape[i] <= 2147483647 / count_ (100 vs. 71) blob size exceeds INT_MAX

If the total amount of data is less than 2GB, should we still split it into separate files?

According to a comment in Caffe's source code, a single file would be better:

If shuffle == true, the ordering of the HDF5 files is shuffled, and the ordering of data within any given HDF5 file is shuffled, but data between different files are not interleaved.
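
So with a single file, enabling shuffling in hdf5_data_param shuffles the entire dataset; a minimal sketch of the relevant parameter block (the shuffle field is part of Caffe's HDF5DataParameter):

 hdf5_data_param {
   source: "train_h5_list.txt"
   batch_size: 32
   shuffle: true # shuffles file order and rows within each file, but does not interleave files
 }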


