As a follow-up to yesterday’s GLSL wrapper class, here is a small example of the class in use. This is an implementation of Conway’s Game of Life, running entirely on the GPU. The picture here really fails to do the game justice, as you need to see it in motion, so if you can’t run the program, drop over to YouTube for a blurry demonstration.
You will of course need to grab a copy of yesterday’s shader class, and drop it in the same directory before running this example. When the example loads you will be presented with a blank screen – resize the window by dragging the lower-right corner, and patterns will appear.
This example runs 3 simulations simultaneously, one in each of the red, green and blue colour components. Rather than setting up an initial state, it uses the garbage present in the back buffer to seed the simulation – this works fine on Mac and Windows, which fill the buffer with garbage when you resize, but may not work as well on other platforms that are over-zealous about clearing memory. Also note that the simulation wraps around from top to bottom and from side to side, allowing patterns to propagate cleanly across the edges.
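If the uninitialised-buffer trick produces nothing on your platform, an alternative is to seed the texture explicitly with random noise. The following is only a rough sketch, not part of the original example – the seed_texture helper is hypothetical – and it would need to be called somewhere the resize copy won’t immediately overwrite it, for instance at the end of on_resize, after copyFramebuffer has run.

import random
import pyglet

# hypothetical helper (not in the original example): fill a pyglet texture
# with random on/off values so the simulation has a starting state
def seed_texture(tex):
    # one byte per channel, each channel randomly fully on or fully off
    data = ''.join(chr(random.choice((0, 255)))
                   for _ in range(tex.width * tex.height * 4))
    noise = pyglet.image.ImageData(tex.width, tex.height, 'RGBA', data)
    # upload the noise into the existing texture
    tex.blit_into(noise, 0, 0, 0)

# e.g. seed_texture(texture), called once after the texture has its final size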
Although the implementation is my own, the concept of running the Game of Life with GLSL is something I have seen elsewhere. I don’t recall where, but if anyone can remind me, I would like to give credit where it is due.
#
# Copyright Tristam Macdonald 2008.
#
# Distributed under the Boost Software License, Version 1.0
# (see http://www.boost.org/LICENSE_1_0.txt)
#

import pyglet
from pyglet.gl import *

from shader import Shader

# create the window, but keep it offscreen until we are done with setup
window = pyglet.window.Window(640, 480, resizable=True, visible=False, caption="Life")

# centre the window on whichever screen it is currently on (in case of multiple monitors)
window.set_location(window.screen.width/2 - window.width/2,
                    window.screen.height/2 - window.height/2)

# create our shader
shader = Shader(['''
void main() {
    // transform the vertex position
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // pass through the texture coordinate
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
'''], ['''
uniform sampler2D tex0;
uniform vec2 pixel;

void main() {
    // retrieve the texture coordinate
    vec2 c = gl_TexCoord[0].xy;
    // and the current pixel
    vec3 current = texture2D(tex0, c).rgb;

    // count the neighbouring pixels with a value greater than zero
    vec3 neighbours = vec3(0.0);
    neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2(-1,-1)).rgb, vec3(0.0)));
    neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2(-1, 0)).rgb, vec3(0.0)));
    neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2(-1, 1)).rgb, vec3(0.0)));
    neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 0,-1)).rgb, vec3(0.0)));
    neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 0, 1)).rgb, vec3(0.0)));
    neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 1,-1)).rgb, vec3(0.0)));
    neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 1, 0)).rgb, vec3(0.0)));
    neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 1, 1)).rgb, vec3(0.0)));

    // check if the current pixel is alive
    vec3 live = vec3(greaterThan(current, vec3(0.0)));

    // resurrect if we are not live, and have 3 live neighbours
    current += (1.0-live) * vec3(equal(neighbours, vec3(3.0)));
    // kill if we do not have either 3 or 2 neighbours
    current *= vec3(equal(neighbours, vec3(2.0))) + vec3(equal(neighbours, vec3(3.0)));

    // fade the current pixel as it ages
    current -= vec3(greaterThan(current, vec3(0.4)))*0.05;

    // write out the pixel
    gl_FragColor = vec4(current, 1.0);
}
'''])

# bind our shader
shader.bind()
# set the correct texture unit
shader.uniformi('tex0', 0)
# unbind the shader
shader.unbind()

# create the texture
texture = pyglet.image.Texture.create(window.width, window.height, GL_RGBA)

# create a fullscreen quad
batch = pyglet.graphics.Batch()
batch.add(4, GL_QUADS, None, ('v2i', (0,0, 1,0, 1,1, 0,1)), ('t2f', (0,0, 1.0,0, 1.0,1.0, 0,1.0)))

# utility function to copy the framebuffer into a texture
def copyFramebuffer(tex, *size):
    # if we are given a new size
    if len(size) == 2:
        # resize the texture to match
        tex.width, tex.height = size[0], size[1]

    # bind the texture
    glBindTexture(tex.target, tex.id)
    # copy the framebuffer
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, tex.width, tex.height, 0)
    # unbind the texture
    glBindTexture(tex.target, 0)

# handle the window resize event
@window.event
def on_resize(width, height):
    glViewport(0, 0, width, height)

    # setup a simple 0-1 orthogonal projection
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(0, 1, 0, 1, -1, 1)
    glMatrixMode(GL_MODELVIEW)

    # copy the framebuffer, which also resizes the texture
    copyFramebuffer(texture, width, height)

    # bind our shader
    shader.bind()
    # set a uniform to tell the shader the size of a single pixel
    shader.uniformf('pixel', 1.0/width, 1.0/height)
    # unbind the shader
    shader.unbind()

    # tell pyglet that we have handled the event, to prevent the default handler from running
    return pyglet.event.EVENT_HANDLED

# clear the window and draw the scene
@window.event
def on_draw():
    # clear the screen
    window.clear()

    # bind the texture
    glBindTexture(texture.target, texture.id)
    # and the shader
    shader.bind()

    # draw our fullscreen quad
    batch.draw()

    # unbind the shader
    shader.unbind()
    # and the texture
    glBindTexture(texture.target, 0)

    # copy the result back into the texture
    copyFramebuffer(texture)

# schedule an empty update function, at 60 frames/second
pyglet.clock.schedule_interval(lambda dt: None, 1.0/60.0)

# make the window visible
window.set_visible(True)

# finally, run the application
pyglet.app.run()
Thanks for the pyglet Shader class Tristam, I wasn’t really looking forward to using those ugly ctypes 🙂
I found that, on my laptop, the above code acted strangely. I got it working by changing the greaterThan tests and counting neighbours using integers…
e.g.,
ivec3 neighbours = ivec3(0);
neighbours += ivec3(greaterThan(texture2D(tex0, c + pixel*vec2(-1,-1)).rgb, vec3(0.1)));
…
current += (1.0-live) * vec3(equal(neighbours, ivec3(3)));
etc…
Cheers,
b
Thanks for that! The integer operations appear a little flaky with my integrated GPU – probably another issue with the Mac drivers.
On pyglet trunk on Ubuntu 7.10, all you get is a black screen.
That is pretty much expected – the X server has an annoying habit of clearing buffers before handing them to the application. You would need to fill the texture or frame buffer before starting the render loop.
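As an illustration (a hypothetical sketch, not from the original post), one way to fill the back buffer before the render loop is to scatter a few thousand random points into it using the fixed-function calls, assuming the 0-1 orthographic projection set up in on_resize is current so that the first copyFramebuffer picks the points up.

import random
from pyglet.gl import *

# hypothetical helper: splatter random live cells into the back buffer,
# assuming the 0-1 orthographic projection from on_resize is active
def seed_framebuffer(count=20000):
    glPointSize(1)
    glBegin(GL_POINTS)
    for _ in range(count):
        # a random colour gives each of the three simulations its own seed
        glColor3f(random.random(), random.random(), random.random())
        glVertex2f(random.random(), random.random())
    glEnd()
    # restore the default colour so later drawing is unaffected
    glColor3f(1.0, 1.0, 1.0)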
I also get only a black screen. Pyglet version is 1.1.3 on Linux.
Oh, sorry. I haven’t read the above response.
Ja, I have been meaning to re-make this example at some point. Probably when I get around to polishing my new and improved GLSL wrapper 😉
…an _annoying_ habit? 🙂 Sounds like good behaviour to me. Also vista does the same.
For quick relief, either fill the buffer randomly (as Tristam said), or add some permanently ON cells in the shader itself…e.g.,
if (distance(gl_FragCoord.xy,vec2(300,200)) < 100)
gl_FragColor.rgb = vec3(1.0);
else
.. do the normal CA stuff…
For what it’s worth, I find this runs fine on my Nvidia card (apart from being entirely black 😉 ), but crawls along totally unresponsively on a more powerful machine with an ATI card.
This is awesome. It seems like it would be beneficial to implement a ShaderGroup that can be used in a batch, analogous to a TextureGroup. I’ll work on such a thing and send you a link when I have something.
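For reference, a bare-bones sketch of that idea (hypothetical, not part of the post) could subclass pyglet.graphics.Group and bind the shader in set_state/unset_state, the same hooks TextureGroup uses:

import pyglet

# hypothetical ShaderGroup: binds a Shader around anything drawn through it
# in a Batch, much as pyglet's TextureGroup binds a texture
class ShaderGroup(pyglet.graphics.Group):
    def __init__(self, shader, parent=None):
        super(ShaderGroup, self).__init__(parent)
        self.shader = shader

    def set_state(self):
        # called by the batch before drawing this group's vertex lists
        self.shader.bind()

    def unset_state(self):
        # called by the batch after drawing
        self.shader.unbind()

# usage might look something like:
# group = ShaderGroup(shader)
# batch.add(4, GL_QUADS, group, ('v2i', (0,0, 1,0, 1,1, 0,1)))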
Hey, pretty awesome stuff… found this linked off the pyglet mailing list. Looks like I have a lot of learning ahead of me :P.
BTW, check out: http://mikeash.com/software/gpulife/
Similar concept, but it’s a screensaver, and only does one simulation.
I’m on Windows, but this doesn’t work.
See the comment thread with Florian, above. On Windows and Linux this tends to result in a black screen, because they clear the backbuffer on startup. You can fill the texture with some initial noise, or tweak the shader to leave some pixels always on, either of which will “fix” the issue.