In the following we show some examples of how to create datasets in HDF5 (with h5py) and update their values.
### First dataset
%% Cell type:code id: tags:
``` python
import h5py
from timeit import timeit  # to measure execution time
import numpy as np  # this is the main python numerical library
f = h5py.File("testdata.hdf5", 'w')
# We create a test 2-d array filled with 1s, with 10 rows and 6 columns
data = np.ones((10, 6))
f["dataset_one"] = data
# We now retrieve the dataset from the file (in fact it is still in memory)
dset = f["dataset_one"]
```
%% Output
<frozen importlib._bootstrap>:491: RuntimeWarning: The global interpreter lock (GIL) has been enabled to load module 'h5py._errors', which has not declared that it can run safely without the GIL. To override this behavior and keep the GIL disabled (at your own risk), run with PYTHON_GIL=0 or -Xgil=0.
%% Cell type:code id: tags:
``` python
# The following instructions show some dataset metadata
print(dset)
print(dset.dtype)
print(dset.shape)
```
%% Output
<HDF5 dataset "dataset_one": shape (10, 6), type "<f8">
float64
(10, 6)
%% Cell type:markdown id: tags:
### Dataset slicing
Datasets support slicing operations analogous to those of NumPy arrays. h5py translates these selections into portions of the dataset, and HDF5 then reads the data from "disk". Slicing into a dataset object returns a NumPy array.
%% Cell type:code id: tags:
``` python
# The ellipsis means "as many ':' as needed";
# here we use it to read the entire dataset
# into a numpy array
out=dset[...]
print(out)
type(out)
```
%% Output
[[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]]
numpy.ndarray
%% Cell type:code id: tags:
``` python
dset[1:5, 1] = 0.0
dset[...]
# but we cannot use negative steps with a dataset
try:
    dset[0, ::-1]
except Exception:
    print('No no no!')
```
%% Output
No no no!
%% Cell type:code id: tags:
``` python
# random 2-d distribution in the range (-1, 1)
data = np.random.rand(15, 10) * 2 - 1
dset = f.create_dataset('random', data=data)
# print the first five even-indexed rows and the first two columns
out = dset[0:10:2, :2]
print(out)
# clip all negative values to zero, using boolean indexing
dset[data < 0] = 0
```
%% Output
[[ 0.76179305 -0.20226613]
[-0.44336689 0.76110537]
[-0.53034403 0.96049982]
[-0.73538148 -0.20414973]
[ 0.79388089 0.10900864]]
%% Cell type:markdown id: tags:
### Resizable datasets
If we don't know the dataset size in advance and we need to append new data several times, we have to create a resizable dataset and then append data to it in a scalable manner.
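As a minimal sketch (the file and dataset names here are illustrative), a dataset created with a `maxshape` argument can be grown with `resize` before each append:
%% Cell type:code id: tags:
``` python
import h5py
import numpy as np

with h5py.File("resizable_demo.hdf5", "w") as fr:
    # maxshape=(None, 6) lets the first axis grow without bound
    dset = fr.create_dataset("growing", shape=(0, 6), maxshape=(None, 6), dtype="f8")
    for i in range(3):
        chunk = np.full((10, 6), float(i))   # new block of rows to append
        start = dset.shape[0]
        dset.resize(start + chunk.shape[0], axis=0)
        dset[start:, :] = chunk              # write into the newly added rows
    print(dset.shape)  # (30, 6)
```
%% Cell type:markdown id: tags: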
We can directly create nested groups with a single instruction. For instance, to create the group 'nisp_frame', then the subgroup 'detectors', and finally its child group 'det11', we can use the instruction below.
%% Cell type:code id: tags:
``` python
nf = h5py.File('nisp_frame.hdf5', 'a')
grp = nf.create_group('nisp_frame/detectors/det11')
grp['image'] = np.zeros((10, 10))  # example dataset; the stored content is illustrative
print(grp.name)
print(grp.parent)
print(grp.file)
print(grp)  # prints some group information: it has one member, the dataset
```
%% Output
/nisp_frame/detectors/det11
<HDF5 group "/nisp_frame/detectors" (1 members)>
<HDF5 file "nisp_frame.hdf5" (mode r+)>
<HDF5 group "/nisp_frame/detectors/det11" (1 members)>
%% Cell type:markdown id: tags:
## Attributes
Attributes can be attached to a group or to a dataset. Both expose the **.attrs** property to access existing attributes or define new ones. With h5py, the attribute type is inferred from the assigned value, but it is also possible to assign a type explicitly.
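For example (the file, dataset, and attribute names here are illustrative), attributes can be set through `.attrs`, and `.attrs.create` forces a specific type instead of relying on inference:
%% Cell type:code id: tags:
``` python
import h5py
import numpy as np

with h5py.File("attrs_demo.hdf5", "w") as fa:
    dset = fa.create_dataset("image", data=np.zeros((4, 4)))
    dset.attrs["exposure_time"] = 90.0   # type inferred as float64
    dset.attrs["instrument"] = "NISP"    # stored as a string
    # explicitly assign a type instead of relying on inference
    dset.attrs.create("gain", 2, dtype=np.int16)
    print(dset.attrs["exposure_time"], dset.attrs["gain"].dtype)  # 90.0 int16
```
%% Cell type:markdown id: tags: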
In the following instruction we create a reference from the detector 11 scientific image to the corresponding star catalog, which is stored in the same file
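A hedged sketch of what such a cross-reference could look like (the group, dataset, and attribute names are assumptions, not the author's actual layout): an h5py object reference (`.ref`) can be stored as an attribute of the image dataset and later dereferenced by indexing the file with it.
%% Cell type:code id: tags:
``` python
import h5py
import numpy as np

with h5py.File("refs_demo.hdf5", "w") as fd:
    image = fd.create_dataset("nisp_frame/detectors/det11/image", data=np.zeros((4, 4)))
    catalog = fd.create_dataset("catalogs/det11_stars", data=np.arange(10))
    # store an object reference to the star catalog on the image dataset
    image.attrs["star_catalog"] = catalog.ref
    # dereference: indexing the file with a reference returns the object
    linked = fd[image.attrs["star_catalog"]]
    print(linked.name)  # /catalogs/det11_stars
```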