A Simple C++ OpenGL Shader Loader – Improved

I wrote one back in 2013, but I’ve learnt some things since then, so I’ve re-written it all with some changes. You can still load/compile shaders from strings or files, and then link/validate/use the resulting shader programs, only this time it’s all a little bit cleaner and more robust.

The main deficiencies of the previous incarnation were:

  • The Shader class wasn’t required for what I was using this stuff for, so I stripped it out – now you just create a ShaderProgram directly,
  • Error logs weren’t shown for any shader or shader program errors,
  • The shader program didn’t get validated,
  • Abuse of exit(-1) rather than throwing runtime_error exceptions,
  • Inclusion of using clauses in headers, and
  • Possible wonky static declaration and initialisation of a const_iterator in the uniform location getter, which meant it might only have “got” the uniform location correctly on the first run, when the const_iterator was declared – although in practice I haven’t seen any issues, perhaps because I spotted and fixed it but forgot to update the article…

Anyway, below is an improved version that fixes all of the above – here’s how you use it…

Usage
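In rough terms it looks like this – the method names used here (loadFromFile, linkAndValidate and so on) are illustrative assumptions that match the condensed sketch in the next section, so adjust to whatever the class actually exposes:

// Compile, link and validate a shader program (assumed method names)
ShaderProgram shaderProgram;
shaderProgram.loadFromFile( GL_VERTEX_SHADER,   "basic.vert" );
shaderProgram.loadFromFile( GL_FRAGMENT_SHADER, "basic.frag" );
shaderProgram.linkAndValidate();

// ...then, in the draw loop...
shaderProgram.use();
//   ...provide uniform data and draw your geometry here...
shaderProgram.disable();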

ShaderProgram Class Source Code

Here’s the code for the class itself:
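In condensed form, a class along these lines does the job – the exact method names, error messages and member layout below are illustrative assumptions rather than a verbatim listing:

// Condensed sketch of a ShaderProgram class: compile from string or file,
// link/validate, use/disable, and cache uniform locations in a map.
#include <GL/glew.h>

#include <fstream>
#include <map>
#include <sstream>
#include <stdexcept>
#include <string>

class ShaderProgram
{
public:
    ShaderProgram() : programId( glCreateProgram() ) {}
    ~ShaderProgram() { glDeleteProgram( programId ); }

    // Compile a single shader stage from a string of GLSL source and attach it
    void loadFromString( GLenum shaderType, const std::string &source )
    {
        GLuint shaderId = glCreateShader( shaderType );
        const char *src = source.c_str();
        glShaderSource( shaderId, 1, &src, nullptr );
        glCompileShader( shaderId );

        GLint status = GL_FALSE;
        glGetShaderiv( shaderId, GL_COMPILE_STATUS, &status );
        if ( status != GL_TRUE )
            throw std::runtime_error( "Shader compilation failed:\n" + infoLog( shaderId, true ) );

        glAttachShader( programId, shaderId );
        glDeleteShader( shaderId ); // Flagged for deletion along with the program
    }

    // Convenience wrapper: read the GLSL source from a file, then compile it
    void loadFromFile( GLenum shaderType, const std::string &filename )
    {
        std::ifstream file( filename.c_str() );
        if ( !file.good() )
            throw std::runtime_error( "Could not open shader file: " + filename );

        std::stringstream buffer;
        buffer << file.rdbuf();
        loadFromString( shaderType, buffer.str() );
    }

    // Link and validate the program, throwing (rather than calling exit) on failure
    void linkAndValidate()
    {
        GLint status = GL_FALSE;

        glLinkProgram( programId );
        glGetProgramiv( programId, GL_LINK_STATUS, &status );
        if ( status != GL_TRUE )
            throw std::runtime_error( "Shader program link failed:\n" + infoLog( programId, false ) );

        glValidateProgram( programId );
        glGetProgramiv( programId, GL_VALIDATE_STATUS, &status );
        if ( status != GL_TRUE )
            throw std::runtime_error( "Shader program validation failed:\n" + infoLog( programId, false ) );
    }

    void use()     { glUseProgram( programId ); }
    void disable() { glUseProgram( 0 );         }

    // Look up a uniform location by name, caching it in the map on first use
    GLint uniform( const std::string &name )
    {
        std::map<std::string, GLint>::iterator it = uniformMap.find( name );
        if ( it == uniformMap.end() )
        {
            GLint location = glGetUniformLocation( programId, name.c_str() );
            uniformMap[name] = location;
            return location;
        }
        return it->second;
    }

private:
    // Fetch the info log for a shader (isShader == true) or a program
    static std::string infoLog( GLuint id, bool isShader )
    {
        GLint length = 0;
        if ( isShader ) { glGetShaderiv( id, GL_INFO_LOG_LENGTH, &length );  }
        else            { glGetProgramiv( id, GL_INFO_LOG_LENGTH, &length ); }

        std::string log( length > 0 ? length : 1, '\0' );
        if ( isShader ) { glGetShaderInfoLog( id, length, nullptr, &log[0] );  }
        else            { glGetProgramInfoLog( id, length, nullptr, &log[0] ); }
        return log;
    }

    GLuint programId;
    std::map<std::string, GLint> uniformMap;
};

Throwing std::runtime_error on compile/link/validate failures (with the info log attached) is what covers the error-reporting, validation and exit(-1) points from the list above.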

Note: because the whole thing is built as a single hpp, the use() and disable() methods are defined inside the class body and so are implicitly inline – whether the compiler actually inlines them is up to it, but keeping them in the header at least gives it the chance (breaking the class into .h and .cpp files would, if anything, make that harder)… other than that I’m thinking the above code is pretty clean, robust and usable.

A Simple C++ OpenGL Shader Loader

Update: There’s a re-worked and improved version of this shader loading code here: http://r3dux.org/2015/01/a-simple-c-opengl-shader-loader-improved/ – you should probably use that instead of this.


I’ve been doing a bunch of OpenGL programming recently and wanted to create my own shader classes to make setting up shaders as easy as possible – so I did ;-) To create vertex and fragment shaders and tie them into a shader program you can just include the Shader.hpp and ShaderProgram.hpp headers and use code like the following:
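Something along these lines, for example – the constructor arguments and method names are illustrative assumptions based on the description, not necessarily the exact API:

// Create and compile the individual shaders, then tie them into a program
Shader vertexShader( GL_VERTEX_SHADER );
vertexShader.loadFromFile( "shader.vert" );

Shader fragmentShader( GL_FRAGMENT_SHADER );
fragmentShader.loadFromFile( "shader.frag" );

ShaderProgram *shaderProgram = new ShaderProgram();
shaderProgram->attachShader( vertexShader );
shaderProgram->attachShader( fragmentShader );
shaderProgram->link();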

There’s also a loadFromString(some-string-containing-GLSL-source-code) method, if that’s your preference.
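For example (again, the names are just illustrative):

// Compile a shader from a string of GLSL source rather than from a file
std::string vertexSource = "...";   // some string containing GLSL source code
vertexShader.loadFromString( vertexSource );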

The ShaderProgram class uses a string/int map as key/value pairs, so to add attributes or uniforms you just specify their name and they’ll have a location assigned to them:
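For example, something like this – addAttribute and addUniform are assumed names for whatever the add methods are called:

// Register attribute and uniform names -- each gets a location stored in the map
shaderProgram->addAttribute( "vertexPosition" );
shaderProgram->addAttribute( "vertexColour" );
shaderProgram->addUniform( "mvpMatrix" );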

The ShaderProgram class then uses two methods called attribute and uniform to return the bound locations (you could argue that I should have called these methods getAttribute and getUniform – but I felt that just attribute and uniform were cleaner in use. Feel free to mod if you feel strongly about it). When binding vertex attribute pointers you can use code like this:
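Something along these lines, for example (the attribute name is illustrative):

// Bind a vertex attribute pointer using the location stored in the ShaderProgram
glEnableVertexAttribArray( shaderProgram->attribute( "vertexPosition" ) );
glVertexAttribPointer( shaderProgram->attribute( "vertexPosition" ),
                       3,         // Three components per vertex (x, y, z)
                       GL_FLOAT,  // Component data type
                       GL_FALSE,  // Don't normalise the values
                       0,         // Stride: tightly packed
                       0 );       // Offset into the currently bound buffer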

Finally, when drawing your geometry you can just enable the shader program, provide the location and data for bound uniforms, and then disable it like this (I’m using the GL Mathematics library for matrices – you can use anything you fancy):
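In outline it looks something like this – the uniform name and matrix variables are illustrative:

// Using GLM for the matrices (glm/glm.hpp and glm/gtc/type_ptr.hpp)
shaderProgram->use();

glm::mat4 mvpMatrix = projectionMatrix * viewMatrix * modelMatrix;
glUniformMatrix4fv( shaderProgram->uniform( "mvpMatrix" ),
                    1,                              // One matrix
                    GL_FALSE,                       // No transposition
                    glm::value_ptr( mvpMatrix ) );  // Pointer to the matrix data

// ...draw the geometry here, e.g. glDrawArrays(...) or glDrawElements(...)...

shaderProgram->disable();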

That’s pretty much it – nice and simple. I haven’t done anything with geometry shaders yet so I’ve no idea if there’s anything else you’ll need, but if so it likely won’t be too tricky a job to implement it yourself. Anyways, you can look at the source code for the classes themselves below, and I’ll put the two classes in a zip file here: ShaderHelperClasses.zip.

As a final note, you can’t create anything shader-y without having a valid OpenGL rendering context (i.e. a window to draw stuff to) or the code will segfault – that’s just how it works. The easiest way around this if you want to keep a global ShaderProgram object around is to create it as a pointer (i.e. ShaderProgram *shaderProgram;) and then initialise it later on when you’ve got the window open with shaderProgram = new ShaderProgram(); like I’ve done above.
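In other words, something along these lines (the setup function name is just illustrative):

ShaderProgram *shaderProgram;            // Global scope -- just a pointer, no GL calls yet

void initialiseScene()                   // Called once the window / GL context exists
{
    shaderProgram = new ShaderProgram(); // Safe now -- the OpenGL calls have a context
    // ...load, attach and link the shaders here, as shown earlier...
}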

Cheers! =D

Source code after the jump…

OpenGL – A Modified Phong-Blinn Light Model For Shadowed Areas

I was looking through the book Graphics Programming Methods (2003) the other day when I came across a paper titled “A Modified Phong-Blinn Light Model for Shadowed Areas” and thought it was pretty good, so I implemented it. It’s a really short paper, but what it’s basically saying is that because the ambient light isn’t modified by the geometry’s normals, any characteristics of the geometry are lost in shadowed areas: we calculate the diffuse intensity as the dot product of the light direction and the surface normal and clamp the result between 0 and 1 (i.e. we only take diffuse lighting into consideration for surfaces facing our light source).

What the paper proposes instead is that we allow the dot product to run from -1.0 to +1.0. When the value is within the range 0.0 to 1.0 (i.e. facing the light) we shade as usual, but when it’s between -1.0 and 0.0 (facing away from the light) we calculate the lighting as the ambient light plus the (now negative) diffuse intensity multiplied by the ambient light multiplied by q, where q is a value between 0.0 and 1.0. When q is 0.0 it’s like we’re using a traditional Phong-Blinn model, when it’s 1.0 we’re using the fully modified version, and any value in between is a sliding-scale combination of the two.

To put that another way, if our surface is facing away from the light source, we use: Lighting = Ambient + (Diffuse * Ambient * q)

In the following images, the light source is roughly in line with the geometry on the Y and Z axes, and is way over on the positive X axis (i.e. to the right). Check out the difference…

Standard lighting with q = 0.0 - notice that the inside-right area of the torus is getting no diffuse lighting, and all detail is lost to the ambient lighting.
Improved lighting with q = 0.5 - there's now some detail in the shadowed area on the inside-right of the torus.
Improved lighting with q = 1.0 - there's now significantly more detail in the shadowed area on the inside-right of the torus.

To create this effect, I used the following fragment shader:

Modified Phong-Blinn Light Model Fragment Shader
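It boils down to something along these lines – sketched here as GLSL held in a C++ string (so it could be fed to a loadFromString-style call); the variable and uniform names are illustrative assumptions, and the specular term is left out for brevity:

// Illustrative sketch of the modified diffuse/ambient calculation as a GLSL
// fragment shader stored in a C++ raw string literal.
const char *modifiedPhongBlinnFrag = R"GLSL(
#version 330

in vec3 fragNormal;        // Interpolated surface normal
in vec3 fragToLight;       // Direction from the fragment towards the light

uniform vec3  ambientColour;
uniform vec3  diffuseColour;
uniform float q;           // 0.0 = traditional Phong-Blinn, 1.0 = fully modified model

out vec4 fragColour;

void main()
{
    vec3 N = normalize( fragNormal );
    vec3 L = normalize( fragToLight );

    // Let the dot product run from -1.0 to +1.0 instead of clamping at 0.0
    float NdotL = dot( N, L );

    vec3 lighting;
    if ( NdotL >= 0.0 )
    {
        // Facing the light: shade as usual (ambient + diffuse)
        lighting = ambientColour + diffuseColour * NdotL;
    }
    else
    {
        // Facing away from the light: Ambient + (negative Diffuse * Ambient * q)
        lighting = ambientColour + ( NdotL * ambientColour * q );
    }

    fragColour = vec4( lighting, 1.0 );
}
)GLSL";

With q exposed as a uniform you can slide between the traditional and fully modified models at runtime.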

Credits for this method go to Anders Hast, Tony Barrera, and Ewert Bengtsson – good work, fellas! =D

Simple OpenGL FBO Textures

I was playing around with FBOs and rendering to textures the other week and came up with this. A spinning yellow torus is rendered to a texture, then a second spinning torus is textured using the texture we just created of the first spinning torus… Yeah, I need to stop using donuts as my test objects.

Full source code & shaders after the jump…


Anaglyphic 3D in GLSL

I’ve been playing around with getting some red/cyan stereoscopic 3D working of late, and it’s all turned out rather well – take a look… (Red/Blue or Red/Cyan glasses are required for the effect to work):

Anaglyphic 3D in GLSL – click for the bigger version.

If you’ve got suitable glasses you should definitely see a 3D effect, although I don’t think me rescaling the image has done it any favours – so click the image to see the full sized version for the full effect.

The trick to this has been to render the scene twice to two separate FBO textures, then sample from the left and right textures to draw a fullscreen quad with a combined version as follows:
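The combining step is essentially just a texture fetch per eye and a channel mix – sketched here as GLSL in a C++ string, with illustrative uniform names:

// Illustrative sketch of a red/cyan combine shader stored in a C++ raw string literal
const char *anaglyphCombineFrag = R"GLSL(
#version 330

in vec2 texCoord;                   // Texture coords of the fullscreen quad

uniform sampler2D leftEyeTexture;   // Scene rendered from the left-eye camera
uniform sampler2D rightEyeTexture;  // Scene rendered from the right-eye camera

out vec4 fragColour;

void main()
{
    vec4 left  = texture( leftEyeTexture,  texCoord );
    vec4 right = texture( rightEyeTexture, texCoord );

    // Red channel from the left eye, green and blue from the right eye
    fragColour = vec4( left.r, right.g, right.b, 1.0 );
}
)GLSL";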

In the code itself, the tori spin around on the spot and look pretty good, although there’s no anti-aliasing as yet – I need to create some multisample buffers instead of straight/normal buffers for the FBO, and that hasn’t decided to play ball just yet. Not to worry though, the hard part’s done and I’m sure multisampling will be sorted in a day or so. After that, I might give ColorCode3D (TM)(R)(C)(Blah) a go, as it seems to give a better colour representation whilst still allowing the same amount of depth as traditional anaglyphic techniques. Also, I’ve got to start using asymmetric frustums for the projection to minimise the likelihood of eye-strain, but I don’t see that as being too much of a problem.

Good times! =D