GPU Usage

pomegranate has GPU-accelerated matrix multiplications to speed up all operations involving multivariate Gaussian distributions and all models that use them. This has led to an approximately 4x speedup for multivariate Gaussian mixture models and HMMs compared to using BLAS only. The speedup grows with dimensionality: higher-dimensional models see a larger benefit than lower-dimensional ones.
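To see why matrix multiplication is the operation worth offloading, note that evaluating a multivariate Gaussian log-density over a batch of points reduces largely to matrix products. The sketch below illustrates this in plain NumPy; it is not pomegranate's actual implementation, and the function name `mvn_logpdf` is introduced here for illustration only.

```python
import numpy as np

def mvn_logpdf(X, mu, cov):
    """Log-density of a multivariate Gaussian for each row of X.

    Illustrative only: the quadratic form (x - mu)^T Sigma^{-1} (x - mu)
    for every point is a pair of matrix multiplications, which is the
    GPU-friendly hot spot a library like pomegranate can accelerate.
    """
    d = mu.shape[0]
    diff = X - mu                          # shape (n, d)
    prec = np.linalg.inv(cov)              # shape (d, d)
    # Batched quadratic form via matrix products.
    maha = np.einsum('ij,jk,ik->i', diff, prec, diff)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)
```

Swapping `np` for `cupy` in code like this moves the dominant cost, the batched matrix products, onto the GPU without changing the math.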

By default, pomegranate will activate GPU acceleration if it can import cupy; otherwise it will fall back to BLAS. You can check whether pomegranate is using GPU acceleration with this built-in function:

import pomegranate
print(pomegranate.utils._is_gpu_enabled())

If you’d like to deactivate GPU acceleration you can use the following command:

pomegranate.utils.disable_gpu()

Likewise, if you’d like to activate GPU acceleration you can use the following command:

pomegranate.utils.enable_gpu()
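The import-or-fall-back behavior described above can be sketched as follows. This is illustrative only, not pomegranate's internals; the names `get_backend` and `_GPU_ENABLED` are assumptions introduced for this example.

```python
import numpy

# Sketch of the dispatch pattern implied by enable_gpu/disable_gpu:
# prefer cupy when it imports cleanly, otherwise fall back to numpy (BLAS).
try:
    import cupy
    _GPU_ENABLED = True
except ImportError:
    cupy = None
    _GPU_ENABLED = False

def get_backend():
    """Return the active array module: cupy if enabled, else numpy."""
    return cupy if (_GPU_ENABLED and cupy is not None) else numpy

xp = get_backend()
A = xp.ones((4, 4))
B = A @ A  # runs on the GPU if cupy is active, otherwise on the CPU via BLAS
```

Because cupy mirrors the NumPy API, code written against the returned module works unchanged on either backend.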

FAQ

  1. Why cupy and not Theano?

     pomegranate only needs to perform matrix multiplications on the GPU. While Theano supports an impressive range of more complex operations, it does not offer as simple an interface for a matrix-matrix multiplication as cupy does.

  2. Why am I not seeing a large speedup with my GPU?

     There is a cost to transferring data to and from a GPU. Your GPU may not be fast enough to offset that cost, or your dataset may be too small to take advantage of the massively parallel nature of a GPU.

  3. Does pomegranate work with my type of GPU?

     pomegranate supports whatever GPUs cupy supports; see the cupy documentation for the current list.

  4. Is multi-GPU supported?

     Currently, no. In theory it should be possible, though.