Dreaming of an image loading/saving/manipulation library

For decades I have been using FreeImage as a “backend” image loading/saving library, which I wrapped into my own Bitmap class providing additional image manipulation functionality on top: convert colors, blit, resize, convolve, move channels around, cut & paste, and so on and so forth. The Bitmap class has been evolving for almost 15 years to become what it is now: a true abomination combining different coding styles and looking like a patched stovepipe with numerous ugly fixes and hacks. While it is still doing its job, it is becoming more and more difficult to support and maintain. A full-featured clean image manipulation library would be really nice to have. Here is my features wish list for it, followed by a rough interface sketch.

  • Support 1D, 2D, 3D and Cube images. I’m doing graphics and using this class as a staging ground to upload textures to the GPU. A 1D image can be a particular case of a 2D image with a height of 1, a 2D image can be a case of a 3D image with a depth of 1, and a Cube image can be a 3D image with a depth of 6. Should be able to convert an equirectangular projection to a cube image and back. Should be able to convert a vertical/horizontal cross image to a cube image and back. Should be able to extract a 2D slice from a 3D image (and write it back as well).
  • Support different pixel formats: 8/16/32-bit unsigned/signed integer, 16/32-bit floating point. Arbitrary number of channels: R/RG/RGB/RGBA. Basic image operations like GetPixel, SetPixel, Resize, Flip, etc. should work with all these basic formats. GetPixel should support bilinear and trilinear (for 3D textures) filtering. Cube image lookup should support seamless mode.
  • Some notion of channel semantics. All flavours of RGB-BGR, RGBA-BGRA, ARGB-RGBA, etc. conversions. Different color spaces (CMYK, Lab) are desirable but not a showstopper.
  • Conversion from sRGB to linear color and back.
  • Support for some esoteric formats to store compressed image data and upload it later to the GPU. ETC2/ETC2_EAC is a minimum here. Basic image operations can skip all work with these formats. That’s it, just storage.
  • Blending operations. Normal, Lighten, Darken, Multiply, Average, Add, Subtract, Difference, Negation, Screen, Exclusion, SoftLight, HardLight, VividLight, PinLight, LinearLight, HardMix, ColorDodge, ColorBurn, LinearDodge, LinearBurn, etc.
  • The library should not be too pessimized performance-wise when compiled with -O0 (many C++14/17/20 libraries suffer from this). Should be able to run the code in debug builds reasonably well.
  • Add/remove scanline stride.
  • Load and save different image formats. JPEG, PNG, HDR, KTX, and RAW (uncompressed pixels, mostly for 3D textures) loading and saving are vitally necessary. EXR loading is necessary as well; however, saving is optional.
  • Load and save from/to files and memory.
  • Move channels around. “Take the 2nd channel from this image and put it into the 1st channel of this one”.
  • Some rudimentary drawing operations: ClearColor, FillBox, MakeXorPattern, line drawing, etc.
  • Store mipmap levels (just having a NextImage pointer will do). Treat mipmaps well while loading and saving image formats which support them (KTX).
  • Calculate normal maps from height maps and height maps from normal maps.
  • Should have configurable dependencies. All supported image formats should be optional and configurable at will.
  • Support multiple image loading/saving backends switchable at compile time. For example, libpng/libjpeg vs stb_image.
  • Simple to compile. Two files would be ideal: .cpp and .hpp
  • Anyone?
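
To make the wish list a bit more concrete, here is a purely hypothetical interface sketch. None of these types or methods exist in any real library; they are just one way the items above could map onto an API:

// purely hypothetical interface sketch, not an existing library
struct vec4 { float x, y, z, w; };

enum class ePixelFormat { R8, RG8, RGB8, RGBA8, R16F, RGBA16F, RGBA32F, ETC2, ETC2_EAC };
enum class eImageType   { Image1D, Image2D, Image3D, ImageCube };

class clImage
{
public:
	clImage( eImageType Type, int W, int H, int D, ePixelFormat Format );

	// basic pixel access, with optional bilinear/trilinear filtering
	vec4 GetPixel( int X, int Y, int Z ) const;
	vec4 GetPixelFiltered( float U, float V, float W ) const;
	void SetPixel( int X, int Y, int Z, const vec4& Color );

	// manipulation
	clImage Resized( int NewW, int NewH, int NewD ) const;
	clImage ExtractSlice( int Z ) const;
	void    MoveChannel( const clImage& Src, int SrcChannel, int DstChannel );
	clImage SRGBToLinear() const;
	clImage EquirectangularToCube() const;

	// mipmap chain as a simple linked list of levels
	clImage* GetNextMipLevel() const;
};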

Visual Studio 2015 and lambda parameters

I was writing this code in Visual Studio 2015. Note the ‘auto’ specifier in the lambda parameter list, which makes it a C++14 generic lambda.

template <typename Container, typename Entity>
void Remove( Container& c, Entity e )
{
	auto Iter = std::remove_if(
		c.begin(), c.end(),
		[ e ]( const auto& Ent ){ return Ent == e; }
	);
	c.erase( Iter, c.end() );
}

I ended up with this code to stay compatible with Visual Studio 2013, which does not support generic lambdas.

template <typename Container, typename Entity>
void Remove( Container& c, Entity e )
{
	auto Iter = std::remove_if(
		c.begin(), c.end(),
		[ e ]( const typename Container::value_type& Ent ) { return Ent == e; }
	);
	c.erase( Iter, c.end() );
}
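
For completeness, here is a minimal usage sketch. The container type and the values are made up for illustration, and Remove() is repeated verbatim from the snippet above:

#include <algorithm>
#include <cstdio>
#include <vector>

template <typename Container, typename Entity>
void Remove( Container& c, Entity e )
{
	auto Iter = std::remove_if(
		c.begin(), c.end(),
		[ e ]( const typename Container::value_type& Ent ) { return Ent == e; }
	);
	c.erase( Iter, c.end() );
}

int main()
{
	std::vector<int> v = { 1, 2, 3, 2, 4 };
	Remove( v, 2 );                        // erases both occurrences of 2
	for ( int i : v ) printf( "%d ", i );  // prints: 1 3 4
	return 0;
}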

Dual panel orthodox portable file manager: Wal Commander GitHub Edition

As a zealous Windows fan and developer, I use Far Manager to do all my file management and a significant amount of coding. When it comes to Linux or OS X, I feel like a fish dragged ashore: no familiar UI and hotkeys to dig through my files and type my code. It is painful because I need to switch between Windows/Linux/OS X and cannot live on a single system. A little over a year ago, I found a portable open-source file manager called Wal Commander, which was pretty close to what Far Manager is on Windows. I started using it and found some nice features missing, which I was able to add. The original author was very slow with development and integration of patches, so I decided to put up a GitHub repository and stockpile all my contributions there.

Since then, several people have contributed to the project to improve it and make it usable on FreeBSD. The project has evolved far beyond the original source code (a working FreeBSD port, tons of new features, step-by-step refactoring and code improvements), so I renamed it to Wal Commander GitHub Edition.

Wal Commander GitHub Edition

Here is a complete list of new features in the current development branch (master-0.16.2):

  • Added: share search text between all search and search-n-replace dialogs
  • Added: indicator of the current sorting order in the panel
  • Added: taskbar hints for the Alt key
  • Added: option ‘Select folders’
  • Added: F3 calculates the size of multiple selected folders
  • Added: command line autocomplete
  • Added: save window size & position
  • Added: save path in panels
  • Hotkey: Shift+Enter to start a command without the terminal
  • Hotkey: Ctrl+Tab to switch between editor/viewer and panels
  • Hotkey: Ctrl+O in editor/viewer shows the terminal
  • Hotkey: ‘5’ on the numpad calculates the size of the selected files
  • Hotkey: Ctrl+U swap panels
  • Hotkey: Ctrl+Del in the editor
  • Hotkey: Ctrl+BackSpace in the editor
  • Hotkey: Ctrl+[ and Ctrl+] paste a path from the left/right panel to the command line
  • Hotkey: Ctrl+Enter in Alt-search moves to the next file
  • Hotkey: Ctrl+G applies command to the selected files
  • Hotkey: Ctrl+Insert copies the current file name to the clipboard
  • Hotkey: Ctrl+Insert copies contents of the non-empty command line to the clipboard
  • Hotkey: Ctrl+Enter on .. pastes the current path into the command line
  • Hotkey: ESC clears the command line before toggling the panels
  • Hotkey: ESC cancels Alt+… panel search before toggling the panels
  • Hotkey: DEL deletes files
  • Hotkey: Ctrl+PgDown to enter directories
  • Hotkey: BackSpace to go up dir (configurable, off by default)
  • Improved: Alt-search editline repaints correctly
  • Improved: mouse wheel support
  • Improved: FreeBSD support
  • Improved: editor cursor is now Far-like and blinking
  • Improved: save per-file scrolling positions in the editor
  • Improved: inaccessible paths bring you to the root folder or to the drive selection dialog
  • Improved: panel footer text (less garbage, show free space)
  • Improved: show other panel’s full path in the drive selection dialog
  • Fixed: vertical lines in Full panel modes
  • Fixed: command line is now black
  • Fixed: numpad Enter
  • Fixed: cursor position in the panel after deleting files
  • Fixed: switching language while in Alt-search closes the search editline
  • Fixed: bug with the first letter in Alt-search
  • Fixed: save/restore panel paths on Windows
  • Fixed: save the configuration on exit
  • Fixed: don’t cancel Alt+… panel search on Shift, Alt, and Ctrl
  • Added CMake and Code::Blocks configs
  • Disabled most of the 3D UI
  • Disabled auto indent in the editor
  • Default font is LiberationMono-Regular 14pt
  • Default tab size is 3
  • Case sensitive sorting is disabled by default
  • Fast path selection on the other panel

Wal Commander GitHub Edition - Autocomplete

The project is growing and looking for volunteers to establish a working OS X port.

https://github.com/corporateshark/WalCommander

Smart pointers passed by const reference

Smart pointers are often passed by const reference. C++ experts Andrei Alexandrescu, Scott Meyers, and Herb Sutter discuss this topic in their C++ and Beyond 2011 session ([04:34] On shared_ptr performance and correctness).

Basically, a smart pointer that is passed in by const reference already lives in the current scope, somewhere at the call site. It may be stored in a class member, and you may do something that clears that member while the reference is still in use. But this is not a problem of passing by reference; it is a problem of your architecture and ownership policy.
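
Here is a minimal sketch of that pitfall; the names are made up for illustration. The const reference aliases a shared_ptr stored in a class member, and the callee clears that member while the reference is still in scope:

#include <memory>

struct clWidget { int Value = 0; };

struct clHolder
{
	std::shared_ptr<clWidget> FMember = std::make_shared<clWidget>();

	void Process( const std::shared_ptr<clWidget>& P )
	{
		FMember.reset();   // P is a reference to FMember, which is now empty...
		// P->Value = 1;   // ...so dereferencing P here would be undefined behavior
	}

	void Run() { Process( FMember ); }   // passes a reference to the member itself
};

int main()
{
	clHolder H;
	H.Run();
	return 0;
}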

However, this post is not about correctness. It is about performance and what we can actually gain by switching to const references. The first impression may be that the only thing we get is the avoidance of atomic increments/decrements in the copy constructor and destructor. Let’s take a closer look and write some code to understand what is going on behind the scenes.

First, some helper functions:

// includes used by the code fragments below
#include <cstdio>
#include <ctime>
#include <atomic>

const size_t NUM_CALLS = 10000000;

double GetSeconds()
{
	return ( double )clock() / CLOCKS_PER_SEC;
}

void PrintElapsedTime( double ElapsedTime )
{
	printf( "%f s/Mcalls\n", float( ElapsedTime / double( NUM_CALLS / 10000000 ) ) );
}

Then an intrusive counter:

class iIntrusiveCounter
{
public:
	iIntrusiveCounter(): FRefCounter( 0 ) {}
	virtual ~iIntrusiveCounter() {}
	void    IncRefCount() { FRefCounter++; }
	// deletes itself once the last reference is released
	void    DecRefCount() { if ( --FRefCounter == 0 ) { delete this; } }
private:
	std::atomic<int> FRefCounter;
};

And an ad hoc intrusive smart pointer:

template <class T> class clPtr
{
public:
	// note: no null checks anywhere, the benchmark never uses an empty pointer
	clPtr(): FObject( 0 ) {}
	clPtr( const clPtr& Ptr ): FObject( Ptr.FObject ) { FObject->IncRefCount(); }
	clPtr( T* const Object ): FObject( Object ) { FObject->IncRefCount(); }
	~clPtr() { FObject->DecRefCount(); }
	clPtr& operator = ( const clPtr& Ptr )
	{
		// increment before decrement makes self-assignment safe
		T* Temp = FObject;
		FObject = Ptr.FObject;
		Ptr.FObject->IncRefCount();
		Temp->DecRefCount();
		return *this;
	}
	inline T* operator -> () const { return FObject; }
private:
	T*    FObject;
};

Pretty simple, right?
Let’s now declare a simple class; a smart pointer to an instance of it will be passed to a function, first by value and then by const reference:

class clTestObject: public iIntrusiveCounter
{
public:
	clTestObject():FPayload(32167) {}
	// do some dummy work here
	void Do()
	{
		FPayload++;
	}

private:
	int FPayload;
};

Everything is now ready to write the actual benchmarking code:

void ProcessByValue( clPtr<clTestObject> O ) { O->Do(); }
void ProcessByConstRef( const clPtr<clTestObject>& O ) { O->Do(); }

int main()
{
	clPtr<clTestObject> Obj = new clTestObject;
	for ( size_t j = 0; j != 3; j++ )
	{
		double StartTime = GetSeconds();
		for ( size_t i = 0; i != NUM_CALLS; i++ ) { ProcessByValue( Obj ); }
		PrintElapsedTime( GetSeconds() - StartTime );
	}
	for ( size_t j = 0; j != 3; j++ )
	{
		double StartTime = GetSeconds();
		for ( size_t i = 0; i != NUM_CALLS; i++ ) { ProcessByConstRef( Obj ); }
		PrintElapsedTime( GetSeconds() - StartTime );
	}
	return 0;
}

Let’s build it and see what happens. First, we will start with a completely unoptimized debug version (I use gcc.EXE (GCC) 4.10.0 20140420 (experimental)):

gcc -O0 main.cpp -lstdc++ -std=c++11

The run time is 0.375 s/Mcalls for the pass by value version versus 0.124 s/Mcalls for the pass by const reference version. A persuasive 3x performance difference in the debug build. That is good. Let’s take a look at the underlying assembly. The by-value version:

L25:
	leal	-60(%ebp), %eax
	leal	-64(%ebp), %edx
	movl	%edx, (%esp)
	movl	%eax, %ecx
	call	__ZN5clPtrI12clTestObjectEC1ERKS1_		// call copy ctor
	subl	$4, %esp
	leal	-60(%ebp), %eax
	movl	%eax, (%esp)
	call	__Z14ProcessByValue5clPtrI12clTestObjectE
	leal	-60(%ebp), %eax
	movl	%eax, %ecx
	call	__ZN5clPtrI12clTestObjectED1Ev			// call dtor
	addl	$1, -32(%ebp)
L24:
	cmpl	$10000000, -32(%ebp)
	jne	L25

The by-const-reference version. Notice how clean it is even in a debug build:

L29:
	leal	-64(%ebp), %eax
	movl	%eax, (%esp)
	call	__Z17ProcessByConstRefRK5clPtrI12clTestObjectE	// just a single call
	addl	$1, -40(%ebp)
L28:
	cmpl	$10000000, -40(%ebp)
	jne	L29

All the calls are still in place; the only thing we save here is two expensive atomic operations.
But debug builds are not what we actually want, right? Let’s optimize it and see what happens:

gcc -O3 main.cpp -lstdc++ -std=c++11

The by-value time is now 0.168 s/Mcalls. The by-const-reference time is ZERO. I mean it. No matter how many iterations you run, the elapsed time in this simple test sample will be zero. Let’s see the assembly to check that we are not mistaken somewhere. This is the optimized by-value version:

L25:
	call	_clock
	movl	%eax, 36(%esp)
	fildl	36(%esp)
	movl	$10000000, 36(%esp)
	fdivs	LC0
	fstpl	24(%esp)
	.p2align 4,,10
L24:
	movl	32(%esp), %eax
	lock addl	$1, (%eax)		// this is our inlined IncRefCount()...
	movl	40(%esp), %ecx
	addl	$1, 8(%ecx)			// bodies of ProcessByValue() and Do() - 2 instructions
	lock subl	$1, (%eax)		// .. and this is DecRefCount(). Quite impressive.
	jne	L23
	movl	(%ecx), %eax
	call	*4(%eax)
L23:
	subl	$1, 36(%esp)
	jne	L24
	call	_clock

OK, but why is the by-const-reference version so fast that we cannot even measure it? Here it is:

	call	_clock
	movl	%eax, 36(%esp)
	movl	40(%esp), %eax
	addl	$10000000, 8(%eax)		// here is the final result, no loops, no nothing
	call	_clock
	movl	%eax, 32(%esp)
	movl	$20, 4(%esp)
	fildl	32(%esp)
	movl	$LC2, (%esp)
	movl	$1, 48(%esp)
	flds	LC0
	fdivr	%st, %st(1)
	fildl	36(%esp)
	fdivp	%st, %st(1)
	fsubrp	%st, %st(1)
	fstpl	8(%esp)
	call	_printf

Just wow! The complete benchmark actually boils down to these few assembly lines. The absence of atomic hassle lets the optimizer kick in and fold everything into a single precalculated value. Of course, this is a very trivial code sample. However, it clearly makes two points about why passing smart pointers by const reference is not a premature optimization but a serious performance improvement:

1) the elimination of atomic operations is a significant benefit in itself;
2) the elimination of atomic operations allows the optimizer to kick in and do its magic.

Here is the full source code.

Results with your compiler may vary 🙂

P.S. Herb Sutter has a very elaborate essay on the topic, covering the C++ side in great detail.
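
To put this into std::shared_ptr terms, here is a rough sketch of a parameter-passing convention in the spirit of that guidance (the class and function names below are made up): pass by value only where the function really takes shared ownership, and pass by const reference, or a plain reference to the pointee, where it merely uses the object:

#include <memory>
#include <utility>

struct clTexture { int Width = 0; };

// only uses the object, takes no ownership: a plain reference is enough
void Render( const clTexture& Tex ) { (void)Tex.Width; }

// may or may not keep a copy: const reference avoids the refcount churn
void MaybeCache( const std::shared_ptr<clTexture>& Tex ) { (void)Tex; }

// always takes shared ownership: pass by value and move into the member
class clMaterial
{
public:
	explicit clMaterial( std::shared_ptr<clTexture> Tex ): FTexture( std::move( Tex ) ) {}
private:
	std::shared_ptr<clTexture> FTexture;
};

int main()
{
	auto Tex = std::make_shared<clTexture>();
	Render( *Tex );
	MaybeCache( Tex );
	clMaterial Material( Tex );
	return 0;
}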