Handling large datasets with HDF5

In this tutorial, we will focus on how to handle large datasets with HDF5 in Python.

HDF5 helps you store and manipulate large amounts of numerical data. Let’s have a look at its implementation in Python.

Handling large datasets with HDF5 in Python

Installation

The installation process is quite easy. You just need to enter the following command in the terminal –

pip install h5py
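
To confirm that the installation worked, you can import the package and print its version (a quick sanity check, nothing specific to this tutorial) –

python -c "import h5py; print(h5py.__version__)"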

Implementation of HDF5 in Python

Suppose we have a dataset of shape (1M X 608 X 608 X 3), where M stands for million. As float64 values that is roughly 1,000,000 × 608 × 608 × 3 × 8 bytes ≈ 8.9 TB, far too large to hold in temporary memory (RAM). So we use HDF5 to save such large arrays directly into permanent memory (disk).

import h5py
import numpy as np

# A manageable sample for demonstration: the full 1M-sample array
# (~8.9 TB as float64) would never fit in RAM, so we create a much
# smaller one here. Note that np.random.rand takes the dimensions
# as separate arguments, not as a tuple.
sample_data = np.random.rand(100, 608, 608, 3)

## First create a file named "Random_numbers.h5" and
# open it in write mode to write the content
##
with h5py.File('Random_numbers.h5', 'w') as f:
    f.create_dataset("dataset1", data=sample_data)

## While reading, open the file in read mode
with h5py.File('Random_numbers.h5', 'r') as f:
    # Note that to retrieve the data you need to know
    # the name of the dataset. In this case it is "dataset1".
    # The [:] reads the full dataset into a NumPy array.
    retrieved_data = f['dataset1'][:]
print('First element : ', retrieved_data[0])

Output :

[[[0.35563185 0.59547217 0.36053888]
  [0.02885046 0.96066682 0.28690845]
  [0.14800811 0.43085678 0.36970245]
  ...
  [0.07856159 0.23505179 0.25959175]
  [0.03970569 0.29016038 0.02641811]
  [0.84843547 0.40077632 0.05561672]]
 [[0.26559201 0.97359299 0.15236374]
  [0.66110068 0.92589471 0.50381032]
  [0.67741899 0.87019003 0.35466544]
  ...
  [0.18063835 0.85328907 0.16305181]
  [0.00976526 0.96994848 0.32510741]
  [0.7354476  0.92539469 0.43366281]]
 [[0.62785975 0.2334664  0.40840852]
  [0.87239311 0.31018004 0.83194718]
  [0.06959059 0.566415   0.88275353]
  ...
  [0.38180437 0.83597031 0.90776347]
  [0.08881869 0.51908317 0.72260596]
  [0.61523464 0.37387392 0.68331717]]
 ...
 [[0.02565655 0.05230098 0.12934373]
  [0.2526348  0.78718671 0.18574177]
  [0.45377266 0.22270581 0.48228926]
  ...
  [0.54901118 0.60905905 0.72770906]
  [0.32967195 0.267488   0.22111121]
  [0.20621961 0.8038491  0.36280409]]
 [[0.67120235 0.15871154 0.25751828]
  [0.28025864 0.53307689 0.65182508]
  [0.40939795 0.30761584 0.6463194 ]
  ...
  [0.56512693 0.92060315 0.94590441]
  [0.47803765 0.56483168 0.86713432]
  [0.25376744 0.72887775 0.86382826]]
 [[0.71732982 0.5036685  0.36422589]
  [0.03374496 0.71250429 0.9230377 ]
  [0.63542672 0.81995507 0.44128048]
  ...
  [0.18921904 0.02865259 0.43014785]
  [0.54269269 0.35759151 0.78040305]
  [0.16538634 0.6913133  0.60181118]]]

h5py is especially helpful for handling large data such as arrays of images or databases. You can read my post – Prepare your own data set for image classification in Python – about converting images to processable NumPy arrays; I would suggest you then use h5py to store the resulting large dataset of image arrays, as sketched below. Read the documentation – HDF5 for Python.
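
If the data really is too large for RAM, like the 1M-image array above, you can avoid building the whole array in memory by writing it to the file in batches. Below is a minimal sketch of that pattern, assuming float32 images; the file name images.h5, the dataset name images, and the batch and total sizes are illustrative, not part of the example above –

import h5py
import numpy as np

n_total = 10_000                 # hypothetical number of images; scale as needed
batch_size = 100
img_shape = (608, 608, 3)

with h5py.File('images.h5', 'w') as f:
    # A chunked dataset is allocated lazily, so creating it does not
    # require the full array in RAM. One image per chunk (~4.4 MB as
    # float32) keeps each chunk well under HDF5's 4 GB chunk limit.
    dset = f.create_dataset('images',
                            shape=(n_total, *img_shape),
                            dtype='float32',
                            chunks=(1, *img_shape),
                            compression='gzip')
    for start in range(0, n_total, batch_size):
        # Stand-in for loading a real batch of images from disk
        batch = np.random.rand(batch_size, *img_shape).astype('float32')
        dset[start:start + batch_size] = batch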

Thank you for reading this post. I hope you now understand how to use h5py to handle data too large to fit in RAM. If you have any query, refer to the documentation or comment below; I will be more than happy to help you.
