
Introduction

The main issue faced when programming image processing applications is memory handling. As far back as one can look, images have always been much bigger than the affordable amount of memory space. Moore's law ensures that computing power increases exponentially, but detectors and images follow the same law, so the ratio has remained almost constant over the past 40 years. Astronomical image processing faces the same challenge: modern detectors produce gigabyte-sized images in large numbers. Anyone facing the difficult task of processing large numbers of frames (e.g. in video editing) is also in need of efficient memory handling schemes.

Since the ratio of image size to affordable memory space remains constant, it is probably worth spending the necessary time to design a scalable solution, generic enough to survive the evolutions of technology without requiring large amounts of code to be rewritten too often.

The only way to fit images into the processing space (be it RAM, swap space, or wherever the pixels are stored for processing) is to cut the images into manageable chunks (or tiles). This splitting can either be done explicitly by the programmer, inside the image processing algorithms, or invisibly through the use of virtual memory, allowing the programmer to design algorithms as if all pixels were present in memory at the same moment.
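The second approach can be illustrated with a memory-mapped file: the operating system pages pixels in from disk only when they are actually touched, so the algorithm is written as if the whole image were resident. The sketch below assumes POSIX mmap(), a raw file of float pixels, and a hypothetical file name; it is only an illustration of the principle, not the xmemory interface.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(int argc, char * argv[])
    {
        /* Hypothetical input: any large raw file of float pixels. */
        const char * filename = (argc > 1) ? argv[1] : "frame.raw";
        int          fd;
        struct stat  sta;
        float      * pixels;
        size_t       npix;
        double       sum = 0.0;

        fd = open(filename, O_RDONLY);
        if (fd < 0 || fstat(fd, &sta) != 0) {
            perror(filename);
            return 1;
        }
        /* Map the whole file: pixels now looks like an ordinary
           array, but pages are only read in when actually touched. */
        pixels = mmap(NULL, (size_t)sta.st_size, PROT_READ,
                      MAP_PRIVATE, fd, 0);
        if (pixels == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }
        npix = (size_t)sta.st_size / sizeof(float);
        /* The algorithm is written as if every pixel were in memory. */
        for (size_t i = 0; i < npix; i++)
            sum += pixels[i];
        printf("mean = %g\n", sum / (double)npix);

        munmap(pixels, (size_t)sta.st_size);
        close(fd);
        return 0;
    }

Written this way, the summing loop never knows whether a given pixel came from RAM or from disk; that transparency is exactly what the second kind of method provides.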

This second kind of method is the one implemented in the xmemory module, in response to a number of constraints. The following document describes its design and implementation.

