Rendering a 3D textured cube with hwoa-rang-gl
I recently open sourced the WebGL framework I have been working on for the last couple of months. I’ve been using three.js to make 3D graphics for a long time, but as a very high level library with a large footprint, it hides many things from the developer. If you want more control over the rendering pipeline or need to implement specific effects, it can become cumbersome and tricky to manage, especially while keeping performance high.
On the other hand, writing raw WebGL can be a real pain with its state machine and verbose API. Hence I decided to try to bridge the gap with hwoa-rang-gl. It aims to be compact and provide the minimum set of classes and methods to be productive and get your idea off the ground.
Parts of it can easily be mixed with or replaced by raw WebGL code. For example, you don’t have to use its Framebuffer class; you can create one yourself using gl.createFramebuffer and mix it with the rest of the code. The same applies to other WebGL objects like Texture, Program, etc. You can see the full docs here.
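For instance, here is a rough sketch (plain WebGL only, assuming you already have a gl context) of creating a framebuffer and its color attachment by hand instead of going through the library’s Framebuffer class:
// Create a raw texture to use as the framebuffer's color attachment
const targetTexture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, targetTexture)
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, null)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)

// Create the framebuffer ourselves and attach the texture to it
const framebuffer = gl.createFramebuffer()
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, targetTexture, 0)

// Anything drawn while this framebuffer is bound ends up in targetTexture,
// which you can then sample like any other texture in the rest of your code
gl.bindFramebuffer(gl.FRAMEBUFFER, null)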
This tutorial aims to serve as an introduction to the core concepts of the library by rendering a simple 3D scene with a spinning shaded cube. You can find the finished demo here.
1. App Skeleton
Again, hwoa-rang-gl tries to keep things simple and does not introduce opinionated rules when it comes to structuring your program. Let’s kick it off by creating a HTMLCanvasElement and a corresponding WebGLRenderingContext, sizing our canvas and viewport, and starting an animation loop, very much like we would if we were writing a plain WebGL program:
// Create a HTMLCanvas and obtain a WebGLRenderingContext
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl')

// Size our canvas and append it to the DOM
canvas.width = innerWidth * devicePixelRatio
canvas.height = innerHeight * devicePixelRatio
canvas.style.width = `${innerWidth}px`
canvas.style.height = `${innerHeight}px`
document.body.appendChild(canvas)

// Set the viewport size
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight)

// Enable depth testing
gl.enable(gl.DEPTH_TEST)

// Set the background color
gl.clearColor(0.9, 0.9, 0.9, 1.0)

// Start our animation loop
requestAnimationFrame(renderFrame)

function renderFrame (ts) {
  // Clear the color and depth buffers on each render tick
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

  // Schedule a new animation loop callback
  requestAnimationFrame(renderFrame)
}
With this code we get an empty scene with a grayish background color that is repainted on each frame.
2. Perspective Camera
Let’s introduce the library by installing it with npm i hwoa-rang-gl and importing it at the top of our file:
import * as HwoaRangGL from 'hwoa-rang-gl'
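Since the rest of this tutorial accesses everything through the HwoaRangGL namespace, named imports should work just as well if you prefer to pull in only the pieces you need (a matter of taste, and helpful if your bundler does tree shaking):
import {
  PerspectiveCamera,
  Geometry,
  GeometryUtils,
  Mesh,
  Texture,
  UNIFORM_TYPE_INT,
  UNIFORM_TYPE_VEC3
} from 'hwoa-rang-gl'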
Let’s create our perspective camera:
const fieldOfViewRadians = 45 * Math.PI / 180
const aspect = innerWidth / innerHeight
const near = 0.1
const far = 100
const perspCamera = new HwoaRangGL.PerspectiveCamera(
  fieldOfViewRadians,
  aspect,
  near,
  far
)
// Position our camera in the 3D world
perspCamera.position = [10, 4, 4]
// Look at the center of our 3D scene
perspCamera.lookAt([0, 0, 0])
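If you want the scene to adapt when the window is resized, one straightforward option is to resize the canvas and viewport again and rebuild the camera with the new aspect ratio. Here is a rough sketch under that assumption (it requires declaring perspCamera with let, and the library may well offer a cheaper way to update an existing camera):
window.addEventListener('resize', () => {
  // Resize the drawing buffer and the CSS size to match the new window
  canvas.width = innerWidth * devicePixelRatio
  canvas.height = innerHeight * devicePixelRatio
  canvas.style.width = `${innerWidth}px`
  canvas.style.height = `${innerHeight}px`
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight)

  // Recreate the camera with the updated aspect ratio (assumes `let perspCamera`)
  perspCamera = new HwoaRangGL.PerspectiveCamera(
    fieldOfViewRadians,
    innerWidth / innerHeight,
    near,
    far
  )
  perspCamera.position = [10, 4, 4]
  perspCamera.lookAt([0, 0, 0])
})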
3. The 3D cube
We will place a cube in the center of our 3D world. It will be 1 unit in width, height and depth. Let’s start by constructing the needed geometry by calling HwoaRangGL.GeometryUtils.createBox() and supplying the resulting vertices, uvs, normals and indices to the base Geometry class:
// Create the geometry needed for our cube
const {
  vertices,
  uv,
  normal,
  indices
} = HwoaRangGL.GeometryUtils.createBox({
  width: 1,
  height: 1,
  depth: 1
})

const geometry = new HwoaRangGL.Geometry(gl)
  .addIndex({ typedArray: indices })
  .addAttribute('position', { typedArray: vertices, size: 3 })
  .addAttribute('uv', { typedArray: uv, size: 2 })
  .addAttribute('normal', { typedArray: normal, size: 3 })
We can use the generated geometry and supply it to a Mesh class, which will be responsible for painting it on the screen. We need to provide the geometry along with our vertex & fragment shaders. This part of the API was heavily inspired by the ShaderMaterial class in three.js. Just like there, hwoa-rang-gl also provides the projectionMatrix, viewMatrix and modelMatrix uniforms by default to every Mesh vertex shader, so you don’t have to manage them yourself.
const mesh = new HwoaRangGL.Mesh(gl, {
  geometry,
  uniforms: {},
  vertexShaderSource: `
    attribute vec4 position;
    attribute vec2 uv;

    varying vec2 v_uv;

    void main () {
      gl_Position = projectionMatrix * viewMatrix * modelMatrix * position;
      v_uv = uv;
    }
  `,
  fragmentShaderSource: `
    precision highp float;

    varying vec2 v_uv;

    void main () {
      gl_FragColor = vec4(v_uv, 0.0, 1.0);
    }
  `
})
Finally, we need to augment our renderFrame method to actually draw our cube:
function renderFrame (ts) {
  // Clear the color and depth buffers on each render tick
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

  // 👉 Paint our mesh to the screen
  mesh
    // Bind its program as active
    .use()
    // Incrementally rotate it around the Y axis on each loop
    .setRotation({ y: ts / 1000 })
    // Provide the camera to render it with
    .setCamera(perspCamera)
    // Issue a draw command
    .draw()

  // Schedule a new animation loop callback
  requestAnimationFrame(renderFrame)
}
With this new code, upon reloading our browser we are greeted with a spinning cube using its UV coordinates as colors:
4. Texturing our cube
I am going to use the WebGL logo as a texture for our cube. I resized it to a power-of-two 512x512 square in Photoshop, so we can benefit from mipmapping, and exported it as a .png file.
Here is the needed code to load the image and use it as a texture:
// We initialize our texture as empty 1x1 so we can start using it
// immediately and not have to wait for the image to load to
// render our scene
const texture = new HwoaRangGL.Texture(gl, {
  minFilter: gl.LINEAR_MIPMAP_LINEAR
})
  .bind()
  .fromSize(1, 1)

const img = new Image()
img.onload = () => {
  texture
    .bind()
    .setIsFlip(1)
    .fromImage(img)
    .setAnisotropy(8)
    .generateMipmap()
}
img.src = 'assets/webgl-logo.png'
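One caveat worth mentioning: in WebGL 1, generateMipmap() only works for power-of-two textures, which is why the image was resized to 512x512 in the first place. If you ever load arbitrary images, a tiny standalone helper (not part of the library) can guard that call:
// Small helper — checks that a dimension is a power of two
function isPowerOfTwo (value) {
  return value !== 0 && (value & (value - 1)) === 0
}

img.onload = () => {
  texture
    .bind()
    .setIsFlip(1)
    .fromImage(img)
    .setAnisotropy(8)

  // Only generate mipmaps when both dimensions are powers of two
  if (isPowerOfTwo(img.width) && isPowerOfTwo(img.height)) {
    texture.generateMipmap()
  }
}
If the check fails, you would also want to create the texture with a non-mipmap minFilter such as gl.LINEAR, since the one above was set up with gl.LINEAR_MIPMAP_LINEAR.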
Let’s augment our Mesh and its corresponding fragment shader to account for the texture object:
const mesh = new HwoaRangGL.Mesh(gl, {
  geometry,
  uniforms: {
    // 👇
    // Provide the slot our texture will be bound to
    // as an integer uniform
    texture: {
      type: HwoaRangGL.UNIFORM_TYPE_INT,
      value: 0
    }
  },
  vertexShaderSource: `/* Same as before */`,
  fragmentShaderSource: `
    precision highp float;

    uniform sampler2D texture;
    varying vec2 v_uv;

    void main () {
      gl_FragColor = texture2D(texture, v_uv);
    }
  `
})
Finally, we need to bind our texture to texture unit 0 (gl.TEXTURE0) before rendering the mesh, like so:
function renderFrame (ts) {
  // ...

  // 👉 Bind our texture to slot 0
  gl.activeTexture(gl.TEXTURE0)
  texture.bind()

  // Paint our mesh to the screen
  mesh
    // Bind its program as active
    .use()
    // Incrementally rotate it around the Y axis on each loop
    .setRotation({ y: ts / 1000 })
    // Provide the camera to render it with
    .setCamera(perspCamera)
    // Issue a draw command
    .draw()

  // ...
}
With this code we get this result on the device screen:
5. Basic shading
As a last step, let’s add some shading via directional lighting. The principles of directional lighting are outside the scope of this tutorial, but you can find a good article here.
Let’s add a new lightDirection vec3 uniform and update our shaders to consume the normal buffer from our geometry:
const mesh = new HwoaRangGL.Mesh(gl, {
  geometry,
  uniforms: {
    texture: {
      type: HwoaRangGL.UNIFORM_TYPE_INT,
      value: 0
    },
    // 👉 Light direction as a vec3 uniform
    lightDirection: {
      type: HwoaRangGL.UNIFORM_TYPE_VEC3,
      value: [1, 1, 0.5]
    }
  },
  vertexShaderSource: `
    attribute vec4 position;
    attribute vec2 uv;
    attribute vec3 normal;

    varying vec2 v_uv;
    varying vec3 v_normal;

    void main () {
      gl_Position = projectionMatrix * viewMatrix * modelMatrix * position;
      v_uv = uv;
      // 👇
      // Transform our normal and pass it to the fragment shader
      v_normal = mat3(modelMatrix) * normal;
    }
  `,
  fragmentShaderSource: `
    precision highp float;

    uniform sampler2D texture;
    // 👉 Light direction passed in as a uniform
    uniform vec3 lightDirection;

    varying vec2 v_uv;
    varying vec3 v_normal;

    void main () {
      gl_FragColor = texture2D(texture, v_uv);

      // 👉 Basic directional lighting
      vec3 normal = normalize(v_normal);
      float light = dot(normal, lightDirection);
      gl_FragColor.rgb *= light;
    }
  `
})
And here is the result:
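One optional refinement, not used in the demo above: [1, 1, 0.5] has a length of 1.5, so lit faces can end up brighter than the texture itself, and the dot product goes negative on faces pointing away from the light. Normalizing the direction and clamping the term in the fragment shader keeps the lighting factor between 0.0 and 1.0:
vec3 normal = normalize(v_normal);
// Normalize the light direction and clamp the lighting term to [0.0, 1.0]
float light = max(dot(normal, normalize(lightDirection)), 0.0);
gl_FragColor.rgb *= light;
You could also add a small ambient term so the faces turned away from the light are not completely black.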
Conclusion
While it can already render complex scenes, the library is still an ongoing project. I developed it for my own needs and will continue iterating on it as new needs arise from new projects.
I hope I managed to give you a glimpse into what is possible with the library and that you will give it a try yourself :)