Conway’s Game of Life in GLSL/Pyglet

As a follow-up to yesterday’s GLSL wrapper class, here is a small example of the class in use: an implementation of Conway’s Game of Life, running entirely on the GPU. The picture here really fails to do the game justice, as you need to see it in motion, so if you can’t run the program, drop over to YouTube for a blurry demonstration.

You will of course need to grab a copy of yesterday’s shader class, and drop it in the same directory before running this example. When the example loads you will be presented with a blank screen – resize the window by dragging the lower-right corner, and patterns will appear.

This example runs three simulations simultaneously, one in each of the red, green and blue colour components. Rather than setting up an initial state, it uses the garbage present in the back buffer to seed the simulation – this works fine on Mac and Windows, which fill the buffer with garbage when you resize, but may not work as well on other platforms that are more zealous about clearing memory. Also note that the simulation wraps around from top to bottom and from side to side, allowing patterns to propagate cleanly across the edges.
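The update rule applied per colour channel is the standard Life rule: a dead cell with exactly three live neighbours is born, and a live cell survives only with two or three live neighbours. As a rough sketch (not part of the original program), the same rule with the same toroidal wrap-around can be written in NumPy, where `np.roll` provides the top/bottom and side/side wrapping:

```python
import numpy as np

def life_step(grid):
	"""One Game of Life step on a 2D 0/1 array, wrapping at the edges."""
	# count the eight neighbours of every cell; np.roll wraps around,
	# matching the toroidal behaviour of the shader version
	neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
	                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
	                 if (dy, dx) != (0, 0))
	# born with exactly 3 neighbours, survive with 2 or 3
	return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)
```

A horizontal blinker (three live cells in a row) run through this function flips to a vertical one and back, which is a quick sanity check on the rule.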

Although the implementation is my own, the concept of running the Game of Life with GLSL is something I have seen elsewhere. I don’t recall where, but if anyone can remind me, I would like to give credit where it is due.


# Copyright Tristam Macdonald 2008.
# Distributed under the Boost Software License, Version 1.0
# (see http://www.boost.org/LICENSE_1_0.txt)

import pyglet
from pyglet.gl import *

from shader import Shader

# create the window, but keep it offscreen until we are done with setup
window = pyglet.window.Window(640, 480, resizable=True, visible=False, caption="Life")

# centre the window on whichever screen it is currently on (in case of multiple monitors)
window.set_location(window.screen.width/2 - window.width/2, window.screen.height/2 - window.height/2)

# create our shader
shader = Shader(['''
void main() {
	// transform the vertex position
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	// pass through the texture coordinate
	gl_TexCoord[0] = gl_MultiTexCoord0;
}
'''], ['''
uniform sampler2D tex0;
uniform vec2 pixel;

void main() {
	// retrieve the texture coordinate
	vec2 c = gl_TexCoord[0].xy;

	// and the current pixel
	vec3 current = texture2D(tex0, c).rgb;

	// count the neighbouring pixels with a value greater than zero
	vec3 neighbours = vec3(0.0);
	neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2(-1,-1)).rgb, vec3(0.0)));
	neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2(-1, 0)).rgb, vec3(0.0)));
	neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2(-1, 1)).rgb, vec3(0.0)));
	neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 0,-1)).rgb, vec3(0.0)));
	neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 0, 1)).rgb, vec3(0.0)));
	neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 1,-1)).rgb, vec3(0.0)));
	neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 1, 0)).rgb, vec3(0.0)));
	neighbours += vec3(greaterThan(texture2D(tex0, c + pixel*vec2( 1, 1)).rgb, vec3(0.0)));

	// check if the current pixel is alive
	vec3 live = vec3(greaterThan(current, vec3(0.0)));

	// resurrect if we are not live, and have 3 live neighbours
	current += (1.0-live) * vec3(equal(neighbours, vec3(3.0)));

	// kill if we do not have either 3 or 2 neighbours
	current *= vec3(equal(neighbours, vec3(2.0))) + vec3(equal(neighbours, vec3(3.0)));

	// fade the current pixel as it ages
	current -= vec3(greaterThan(current, vec3(0.4)))*0.05;

	// write out the pixel
	gl_FragColor = vec4(current, 1.0);
}
'''])

# bind our shader
shader.bind()
# set the correct texture unit
shader.uniformi('tex0', 0)
# unbind the shader
shader.unbind()

# create the texture
texture = pyglet.image.Texture.create(window.width, window.height, GL_RGBA)

# create a fullscreen quad
batch = pyglet.graphics.Batch()
batch.add(4, GL_QUADS, None, ('v2i', (0,0, 1,0, 1,1, 0,1)), ('t2f', (0,0, 1.0,0, 1.0,1.0, 0,1.0)))

# utility function to copy the framebuffer into a texture
def copyFramebuffer(tex, *size):
	# if we are given a new size
	if len(size) == 2:
		# resize the texture to match
		tex.width, tex.height = size[0], size[1]

	# bind the texture
	glBindTexture(GL_TEXTURE_2D, tex.id)
	# copy the framebuffer into the texture
	glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, tex.width, tex.height, 0)
	# unbind the texture
	glBindTexture(GL_TEXTURE_2D, 0)

# handle the window resize event
@window.event
def on_resize(width, height):
	glViewport(0, 0, width, height)
	# setup a simple 0-1 orthogonal projection
	glMatrixMode(GL_PROJECTION)
	glLoadIdentity()
	glOrtho(0, 1, 0, 1, -1, 1)
	glMatrixMode(GL_MODELVIEW)

	# copy the framebuffer, which also resizes the texture
	copyFramebuffer(texture, width, height)

	# bind our shader
	shader.bind()
	# set a uniform to tell the shader the size of a single pixel
	shader.uniformf('pixel', 1.0/width, 1.0/height)
	# unbind the shader
	shader.unbind()

	# tell pyglet that we have handled the event, to prevent the default handler from running
	return pyglet.event.EVENT_HANDLED

# clear the window and draw the scene
@window.event
def on_draw():
	# clear the screen
	window.clear()

	# bind the texture
	glBindTexture(GL_TEXTURE_2D, texture.id)
	# and the shader
	shader.bind()

	# draw our fullscreen quad
	batch.draw()

	# unbind the shader
	shader.unbind()
	# and the texture
	glBindTexture(GL_TEXTURE_2D, 0)

	# copy the result back into the texture
	copyFramebuffer(texture)
# schedule an empty update function, at 60 frames/second
pyglet.clock.schedule_interval(lambda dt: None, 1.0/60.0)

# make the window visible
window.set_visible(True)

# finally, run the application
pyglet.app.run()

  1. Thanks for the pyglet Shader class Tristam, I wasn’t really looking forward to using those ugly ctypes 🙂

    I found that, on my laptop, the above code acted strange. I got it working by changing the greaterThan tests and counting neighbours using integers…


    ivec3 neighbours = ivec3(0);
    neighbours += ivec3(greaterThan(texture2D(tex0, c + pixel*vec2(-1,-1)).rgb, vec3(0.1)));

    current += (1.0-live) * vec3(equal(neighbours, ivec3(3)));



  2. …an _annoying_ habit? 🙂 Sounds like good behaviour to me. Also Vista does the same.
    For quick relief, either fill the buffer randomly (as Tristam said), or add some permanently ON cells in the shader itself… e.g.,

    if (distance(gl_FragCoord.xy, vec2(300,200)) < 100.0)
        gl_FragColor.rgb = vec3(1.0);
    .. do the normal CA stuff…

  3. For what it’s worth, I find this runs fine on my Nvidia card (apart from being entirely black 😉 ), but crawls along totally unresponsively on a more powerful machine with an ATI card.

  4. This is awesome. It seems like it would be beneficial to implement a ShaderGroup that can be used in a batch, analogous to a TextureGroup. I’ll work on such a thing and send you a link when I have something.

    • See the comment thread with Florian, above. On Windows and Linux this tends to result in a black screen, because they clear the backbuffer on startup. You can fill the texture with some initial noise, or tweak the shader to leave some pixels always on, either of which will “fix” the issue.
