A common task in image-analysis programming is to display an image, typically stored as a NumPy ndarray of shape (height, width, 3) with dtype np.uint8, using a QImage and a QPixmap. The user can then modify the image interactively, and the QImage is transformed back into an ndarray for further processing.
These tasks are straightforward in many cases, but some edge cases trigger seemingly random errors that are difficult to debug with the solutions commonly found online.
If we look at the Qt documentation, we see that there are two constructors for instantiating a QImage from a data buffer. One assumes that each scanline of the buffer is 32-bit aligned; the other takes a bytesPerLine argument that specifies the number of bytes per line (the stride).
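To make the alignment constraint concrete, here is a small Qt-free sketch (the function name is made up for the example) of how a 32-bit-aligned stride is computed for a 3-channel RGB888 scanline: the row length is rounded up to the next multiple of 4 bytes.

```python
import numpy as np

def aligned_bytes_per_line(width, channels=3):
    """Bytes per scanline after rounding up to 32-bit (4-byte) alignment."""
    return (width * channels + 3) // 4 * 4

# A 5-pixel-wide RGB888 line holds 15 bytes of data but occupies 16 bytes.
print(aligned_bytes_per_line(5))   # -> 16
# A 4-pixel-wide line is already aligned: 12 bytes.
print(aligned_bytes_per_line(4))   # -> 12
```

Whenever width * 3 is not a multiple of 4, the aligned buffer contains padding bytes at the end of every row, which is exactly the situation the stride-aware constructor handles.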
The most general solution is to use the second constructor:
    def setImage(array):
        # array: np.uint8, shape (height, width, 3), RGB channel order
        height, width, _ = array.shape
        qimage = QImage(array.data, width, height, array.strides[0],
                        QImage.Format_RGB888)
        pixmap = QPixmap()
        pixmap.convertFromImage(qimage)
        return pixmap
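Passing array.strides[0] rather than width * 3 also keeps this correct for non-contiguous arrays, such as a cropped view of a larger image. A quick NumPy-only illustration (the image here is a made-up placeholder):

```python
import numpy as np

full = np.zeros((100, 100, 3), dtype=np.uint8)  # hypothetical source image
crop = full[:50, :50]                           # a view, not a copy

# The row stride of the view still reflects the parent's row length:
# 100 pixels * 3 channels = 300 bytes, not 50 * 3 = 150.
print(crop.strides[0])  # -> 300
```

Hard-coding width * 3 as bytesPerLine for such a view would interleave pixels from outside the crop into every row.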
Going the other way, from a QImage that has been displayed and/or modified through a QPixmap and a QPainter back to a NumPy ndarray, can also cause difficulties.
For example, if the original data were not 32-bit aligned, the QImage extracted from the modified QPixmap will now be 32-bit aligned: each scanline is padded, so the buffer is bigger than the original data, and the most common solution found on the internet,

    np.frombuffer(data, dtype=np.uint8).reshape(h, w, 3)

will fail.
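This failure mode can be reproduced without Qt by simulating an aligned buffer. Assuming a 5x5 RGB image whose 15-byte scanlines have been padded to 16 bytes, the naive reshape raises a ValueError because the buffer holds 80 bytes rather than the expected 75:

```python
import numpy as np

h, w = 5, 5
bytes_per_line = 16               # 15 data bytes padded to a 4-byte multiple
data = bytes(h * bytes_per_line)  # simulated 32-bit-aligned buffer

try:
    np.frombuffer(data, dtype=np.uint8).reshape(h, w, 3)
except ValueError as e:
    print("reshape failed:", e)
```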
A general solution that doesn’t assume anything about the image is to use the information given by the QImage:
    def getImage(pixmap):
        qimage = pixmap.toImage().convertToFormat(QImage.Format_RGB888)
        ptr = qimage.constBits()
        # With PyQt, the returned sip.voidptr needs an explicit size first:
        # ptr.setsize(qimage.sizeInBytes())
        array = np.ndarray((qimage.height(), qimage.width(), 3),
                           buffer=ptr,
                           strides=[qimage.bytesPerLine(), 3, 1],
                           dtype=np.uint8)
        # Copy so the result owns its data and outlives the temporary QImage.
        return array.copy()
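The stride-aware np.ndarray construction can be checked without Qt. In this sketch, a 2x3 RGB image is written into a buffer whose 9-byte scanlines are padded to 12 bytes (their 32-bit-aligned size), then recovered by giving NumPy the true row stride, just as getImage does with qimage.bytesPerLine():

```python
import numpy as np

h, w, bpl = 2, 3, 12            # 3 RGB pixels = 9 bytes, padded to 12
buf = bytearray(h * bpl)
original = np.arange(h * w * 3, dtype=np.uint8).reshape(h, w, 3)
for row in range(h):            # write each scanline at its padded offset
    buf[row * bpl : row * bpl + w * 3] = original[row].tobytes()

# Recover the image by declaring the padded row stride explicitly:
recovered = np.ndarray((h, w, 3), buffer=bytes(buf),
                       strides=[bpl, 3, 1], dtype=np.uint8)
print(np.array_equal(recovered, original))  # -> True
```

A plain reshape of the 24-byte buffer could never produce the 18-byte image; the explicit strides tell NumPy to skip the 3 padding bytes at the end of each row.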