Thursday, May 7, 2020

Adjusting the Apex Angle

Finally, we want to be able to change the aspect ratio of the cones, that is, how their base compares to their height.  We are going to control this by restricting the existing scroll-wheel action - moving the cones backwards and forwards - to the area around the origin, and letting the same gesture further away from the origin expand and contract the radius.

Calculating the Ratio 

The first thing I tried was to simply update the radius on the cone but, predictably enough, once you have created the cone it stays created.  I suppose it would be possible to remove the cones and create new ones, but that seemed excessive.  Instead, what we are going to do is to scale the cone but not symmetrically: we are only going to scale it in the x and z directions.  Because scaling is the first operation to be performed in our update sequence, the base of the cone will always be in the xz plane and so scaling in those directions is sufficient.

In order to make this work we need to keep track of the "current intended radius" and express the scaling as a ratio between that and the "actual radius" of the cone.  We hardcoded that to 20 earlier, so our first step is to extract that; at the same time we'll extract the height of the cone which is used in a number of different places in the code.
var rad = 20;     // the "current intended radius"
var baserad = 20; // the "actual radius" the cone geometry was created with
var ht = 50;      // the height of the cone

Identifying the cases

Then we need to identify whether we want to continue using the current event handling - moving the cones backwards and forwards - or the new apex angle adjustment based on the mouse location.  The mouse events provided are based on the canvas coordinates, but have been adjusted to reflect the origin.  Thus by saying "close to center" we really do mean something like within a 60-pixel radius of (0, 0).  This function uses Pythagoras' theorem to determine that:
function closeToCenter(e) {
  var r = e.x * e.x + e.y * e.y; // squared distance from the origin
  return r < 3600;               // i.e. within a 60-pixel radius (60² = 3600)
}

The Updated Event Handler

Finally, we can implement the updated event handler.  The existing logic to update the depth is now made conditional on the event being sufficiently close to the center.  The new logic that is applied in the other case updates the "current radius" and then scales both the x and z dimensions based on the ratio of the "current radius" to the original "base radius". 
if (closeToCenter(e)) {
  depth -= e.wheel;
} else {
  rad -= e.wheel;
  top.scale.x = rad / baserad;
  top.scale.z = rad / baserad;
  bottom.scale.x = rad / baserad;
  bottom.scale.z = rad / baserad;
}
Either way, the existing code to actually update the object's matrix will be called once the new values have been calculated.
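
Putting the pieces together, the whole wheel logic can be sketched as a runnable fragment.  The PhiloGL models are replaced here by plain stand-in objects (topCone, bottomCone and handleWheel are illustrative names, not part of the real code):

```javascript
// A runnable sketch of the combined wheel logic; the PhiloGL models are
// replaced by plain objects so the two branches can be seen directly.
var depth = 0, rad = 20, baserad = 20;
var topCone = { scale: { x: 1, y: 1, z: 1 } };
var bottomCone = { scale: { x: 1, y: 1, z: 1 } };

function closeToCenter(e) {
  return e.x * e.x + e.y * e.y < 3600; // within 60 pixels of the origin
}

function handleWheel(e) {
  if (closeToCenter(e)) {
    depth -= e.wheel;               // near the center: move in and out
  } else {
    rad -= e.wheel;                 // further out: adjust the intended radius
    var ratio = rad / baserad;
    topCone.scale.x = topCone.scale.z = ratio;
    bottomCone.scale.x = bottomCone.scale.z = ratio;
  }
}

handleWheel({ x: 10, y: 10, wheel: 3 });  // close to center: depth becomes -3
handleWheel({ x: 200, y: 0, wheel: 5 });  // far away: rad becomes 15, scale 0.75
```

In the real handler, the existing update calls on both cones would follow either branch, so the new matrix takes effect on the next redraw.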

Summary

That finishes up our conic section viewer.  This post is tagged SHOW_CONES_INTERSECT_PLANE.

The viewer is not as slick or perfect as I would like, but it does work.  More importantly, I now feel I have a handle on what it takes to build WebGL applications with PhiloGL.

Since there is a lot more to WebGL than this, I may well be back to try and dig into additional topics such as custom rendering, blending and the like which seem to require a deeper understanding of the WebGLContext, and possibly even some embedded script programming.

Showing the Intersect Plane

What we really need to be able to do is to visualize the intersect plane, that is, the curve that results from finding all the points on these cones that have the value z = 0.  I've tried and failed a couple of approaches: I was unable to have two WebGL applications showing in different canvases; and, while you weren't looking, I tried adding a plane to the model but the results weren't adequate.

So what we are going to do now is to add another event handler - for click this time - and this will switch between the "3D cone" and "2D conic section" modes.  All the rest of the functionality will work in both modes; you are just somewhat "in the dark" about what you are doing when you are in "intersect" mode.

Showing the Intersect

The basic approach we are going to use for this is to instruct the camera to only show a small amount of the full 3D space; everything else is hidden.  The camera can be told to ignore everything that is too close or too far away, thus allowing the appearance of X-ray vision.  By limiting our view to just around the z = 0 plane, we are able to see the intersection of the cones and the plane.  This is exactly what we were originally going to do in the right-hand canvas; it's just that we are now doing it "modally" rather than in parallel; time-slicing rather than window-slicing if you like.

Because we have defined the camera to be at z = 99, we would like to set both the near and far limits to exactly 99.  However, when we do that, nothing shows up.  Instead, we have to define them to be "a little bit" either side of that exact amount.  The consequence of this is that the lines are not perfect lines but "a little bit fat", but this is just a demonstration, so I'm not that bothered.

We start by introducing a variable to remind us of which mode we are in:
var showIntersect = false;
Then we add an event handler to change between the modes:
onClick: function(e) {
  showIntersect = !showIntersect;
  if (showIntersect) {
    this.camera.near = 98.75;
    this.camera.far = 99.25;
  } else {
    this.camera.near = 10;
    this.camera.far = 200;
  }
  this.camera.update();
}
This first handles the event by inverting the value of showIntersect.  Then, if the value is now true, it limits the camera range to a very small window around the z = 0 plane (from z = 0.25 to z = -0.25), causing something approximating the intersection curve to be shown.  If the value is now false, it goes back to the approximate defaults of showing a wide range.

As with everything else, it is not sufficient to update these variables; it is necessary to call update() (on the camera this time), and then the next redraw event will change the image on the screen.
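
The arithmetic behind those near and far values is worth spelling out.  The camera sits at z = 99 looking toward the origin, so a clip distance d from the camera corresponds to the plane z = 99 - d:

```javascript
// Converting the near/far clip distances into z planes in the scene,
// given the camera position used in this post.
var cameraZ = 99;
var nearPlane = cameraZ - 98.75; // z = 0.25, just in front of the origin
var farPlane = cameraZ - 99.25;  // z = -0.25, just behind it
```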

Summary

Although my previous attempts had met with failure, this approach to showing the intersect plane, while not perfect, was remarkably easy to implement.  The code can be checked out at SHOW_CONES_INTERSECT_PLANE.

Moving the Cones Backwards and Forwards

The next step in our plan is to allow the cones to be moved backwards and forwards along the z-axis.

A Quick Refactoring

Although it was perfectly adequate for what we were trying to achieve, the rot method in the previous post in fact bundled two responsibilities: one was the actual rotation, based on mouse position, and the other was to update the matrix based on the current rotation.

In order to incorporate the new event, we need to split those responsibilities.  So let's create a new update function at the top level and call that from within our rot function.
function update(comp) {
  var m = comp.matrix;
  m.id();
  m.$rotateXYZ(comp.rotation.x, comp.rotation.y, comp.rotation.z);
  m.$translate(comp.position.x, comp.position.y, comp.position.z);
  m.$scale(comp.scale.x, comp.scale.y, comp.scale.z);
}

The Event Handler

The next step is to track the scroll events and remember how far in or out of the screen the cones currently are.  We can do this by adding an event handler for onMouseWheel in the same way that the moon example did, and then tracking the current depth and updating both cones.

At the outermost scope within the loadConics() function, we can declare a current depth, i.e. how far the cones are in or out.  Note that it is very important to initialize this variable: if it starts out undefined, the arithmetic that depends on it will produce NaN values and nothing will render.
var depth = 0;
And then in the events section, we can add the handler that updates it:
onMouseWheel: function(e) {
  e.stop();
  depth -= e.wheel;
  update(top);
  update(bottom);
}
Finally, we need to use this value in our update method.  Going back to what we said about order of operations in the last post, it's very important that this translation is the last thing that is done.  How do we manage that while at the same time insisting that the y translation is done before the rotation?  Well, quite simply, we are allowed to call translate (and indeed any of the operations) multiple times.  We invoke translate twice, once for the y translation and once for the z translation.  Of course, these are still written "backwards", so we end up with this:
function update(comp) {
  var m = comp.matrix;
  m.id();
  m.$translate(0, 0, depth);
  m.$rotateXYZ(comp.rotation.x, comp.rotation.y, comp.rotation.z);
  m.$translate(comp.position.x, comp.position.y, comp.position.z);
  m.$scale(comp.scale.x, comp.scale.y, comp.scale.z);
}
Now we can move the cones in and out of the screen.  This is tagged MOVE_CONES_BACKWARDS_FORWARDS.

Summary

With a small refactoring, we have been easily able to move the cones in and out of the screen.

Rotating the Cones

Our second step is to allow the user to drag the screen around, causing the cones to rotate.  Since we have most of this code already, this should be simple, right?

Sadly, not.  I think I knew at the time that the complexities of positioning, rotation and the fact that the center of the cone was not the point about which I wished to rotate them would come back to bite me.

Check out the tag ROTATE_CONES_WRONG_CENTER to see the details: this is not going to do what we want, but is instead going to rotate the two cones independently about their individual centers.  It does, however, get the generic event handler code out of the way.

First things first

The first thing we need to do is to promote the variables top and bottom from being scoped inside onLoad to being scoped across the entirety of the setup function.

Then we need a variable to track where the drag started so that we can update based on the relative mouse position.

So, at the top this looks like:
function loadConics() {
  var dragStart;
  var top, bottom;
And in onLoad we have:
top = cone(25, Math.PI);
bottom = cone(-25, 0);

Adding the Event Handlers

Again using the moon example as our model, we can add two event handlers for the dragging case.  The first one handles the start of the drag and remembers the starting position:
onDragStart: function (e) {
  dragStart = { x: e.x, y: e.y }
},
The second one handles the case when the mouse has been dragged.  Here we need to calculate the apparent rotation for the movement, update the cones and then remember the new "current" position.
onDragMove: function(e) {
  top.rotation.y += -(e.x - dragStart.x) / 100;
  top.rotation.x += -(e.y - dragStart.y) / 100;
  top.update();
  bottom.rotation.y += (e.x - dragStart.x) / 100;
  bottom.rotation.x += (e.y - dragStart.y) / 100;
  bottom.update();
  dragStart.x = e.x;
  dragStart.y = e.y;
}
This may look like I've really thought about how this is going to work, but nothing could be further from the truth.  I basically just copied the code from the moon example and then tinkered with it until it worked the way I expected.

Fairly obviously, we want the change to be proportional to the amount that the mouse has moved.  Almost equally obviously, we want the rotation about the y-axis to be correlated to the horizontal (x) movement of the mouse and the rotation about the x-axis to be correlated to the vertical (y) movement of the mouse.

Beyond that, the random constant 100 is proportional to the full travel of the mouse, which is 500 pixels in the defined canvas; that gives us the ability to rotate 5 radians, which is just short of 2π for a full rotation.  The signs I determined experimentally based on my intuition of what should happen: when it didn't, I reversed the most likely sign.
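
If you want the constant to be less random, it can be derived from the canvas size by dividing the canvas width by the number of radians a full-width drag should produce.  This dragSensitivity helper is purely illustrative - it is not in the actual code:

```javascript
// Deriving the drag divisor from the canvas width: a drag across the
// whole canvas should rotate by the given number of radians.
function dragSensitivity(canvasWidth, fullDragRadians) {
  return canvasWidth / fullDragRadians;
}
var fullTurn = dragSensitivity(500, 2 * Math.PI); // ≈ 79.6 for a full 2π turn
var usedHere = dragSensitivity(500, 5);           // 100, as in the code above
```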

The update() calls are required to update the matrix, and then the final two lines simply remember the current position.

Animation

In order for this to actually work, we need to add a "frame loop".  Every time we draw the scene, we need to request that it be drawn again.  This is what the requestAnimationFrame() method does.
function draw() {
  ...
  PhiloGL.Fx.requestAnimationFrame(draw);
}

The Problem

The problem is that I want the whole scene to rotate about the origin.  But that's not what's happening: each cone is rotating about its own center, which, as we discovered before, is not at the base or the tip but in the middle of the central axis.  This means that the two cones stop touching the moment you rotate them.  To fully appreciate the experience, you'll need to check out the broken code.
The root of the problem, of course, is that matrix operations are not commutative: that is, that the order in which they are performed makes a difference to the outcome.  I'm not going to go into the details here (consult a matrix textbook) but you can perform a simple experiment with a piece of paper:
  • First, hold it up vertically in front of you.
  • Then rotate about the x-axis by 90º so that it falls "away" from you and lies flat.
  • Now rotate it about the y-axis by 90º so that the part that is further away from you moves to the left.
The paper should be horizontal with its top to your left hand side.
  • Now, hold it vertically again.
  • First rotate it about the y-axis by 90º so that the right hand side moves away from you.
  • Now rotate it about the x-axis by 90º so that the top moves away from you.
The paper should now be vertical but on its side with its top furthest from you.

The only difference here is the order of operations.
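
The paper experiment can also be checked numerically.  This standalone sketch (plain arrays, nothing PhiloGL-specific) builds 90º rotation matrices about the x and y axes and multiplies them in both orders:

```javascript
// Numerically checking that 3D rotations do not commute: build 90-degree
// rotation matrices about the x and y axes and multiply them both ways.
function rotX(t) {
  var c = Math.cos(t), s = Math.sin(t);
  return [[1, 0, 0], [0, c, -s], [0, s, c]];
}
function rotY(t) {
  var c = Math.cos(t), s = Math.sin(t);
  return [[c, 0, s], [0, 1, 0], [-s, 0, c]];
}
// Standard 3x3 matrix product: entry (i, j) is row i of a dot column j of b.
function mul(a, b) {
  return a.map(function(row) {
    return [0, 1, 2].map(function(j) {
      return row[0] * b[0][j] + row[1] * b[1][j] + row[2] * b[2][j];
    });
  });
}
var q = Math.PI / 2;
var xThenY = mul(rotY(q), rotX(q)); // applies the x rotation first, then y
var yThenX = mul(rotX(q), rotY(q)); // applies the y rotation first, then x
```

The two products differ in almost every entry, which is exactly what the paper experiment demonstrates.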

The update() Method

When we call update() on our cones, internally it is taking the rotation, scale and translation we have applied and performing them in a particular order.  Specifically, it is rotating and then translating.  We don't want that: we want to translate first.

Let's look at the code:
update: function() {
  var matrix = this.matrix,
  pos = this.position,
  rot = this.rotation,
  scale = this.scale;

  matrix.id();
  matrix.$translate(pos.x, pos.y, pos.z);
  matrix.$rotateXYZ(rot.x, rot.y, rot.z);
  matrix.$scale(scale.x, scale.y, scale.z);
}
Now, while this appears to do the translation first, that is again because of the properties of matrix math: the first operation specified is the last one "to take effect".  So this is scaling first, then rotating, then translating.

We simply want to do our translation and rotation in the other order, so we'll copy this code into ours, simplify and refactor what's left and update everything.  So we end up with this:
onDragMove: function(e) {
  rot(top, e, -1, -1);
  rot(bottom, e, 1, 1);
  dragStart.x = e.x;
  dragStart.y = e.y;

  function rot(comp, e, sgnx, sgny) {
    comp.rotation.y += sgnx * (e.x - dragStart.x) / 100;
    comp.rotation.x += sgny * (e.y - dragStart.y) / 100;

    var m = comp.matrix;
    m.id();
    m.$rotateXYZ(comp.rotation.x, comp.rotation.y, comp.rotation.z);
    m.$translate(comp.position.x, sgny*comp.position.y, comp.position.z);
    m.$scale(comp.scale.x, comp.scale.y, comp.scale.z);
  }
}
I'm not going to comment further on that code, since other than the changing of operation order it is just a series of refactorings from what we had before.  To check out all the code, use the tag ROTATE_CONES_SCENE_CENTER.

Summary

With a small hiccough, we managed to rotate the cones around the scene origin.

3D Conic Sections

You may or may not know what Conic Sections are.  They are a somewhat advanced topic in mathematics describing a whole family of algebraic equations and relationships derived from a geometrical model. Basically, you have two identical cones placed tip to tip and then slice through them with any plane.

The downside is that it's almost invariably very hard to visualize what is going on.  In general, I recommend (conical, obviously) party hats at a sandy beach, but I thought that it would be a good thing to use to dig my teeth into WebGL before going on to the things that I actually want to achieve.

Consider this picture from the referenced Wikipedia article:
It's not wonderfully clear what's going on.  But imagine if you will that instead of the cones staying still the plane stayed still as the surface of the screen.  You could then rotate, move and adjust the cones and see the effects of your actions on the conic sections.  Moreover, by cunning use of the camera "near and far" options, I'm hoping that we can show just the resultant curve.

The Plan

This blog is meant to be about experimentation - but I'll be honest.  Often by the time I write things up here the experiments are complete and I know what worked and what didn't.  But on this occasion I'm really going in blind.  So let's have a plan:
  • Let's create a basic HTML outline with the two canvases and a simple model (of two cones) standing vertically as in the picture above.  I'll try to set the camera up correctly on the right hand one so that it just shows a pair of crossed lines (the degenerate case).
  • Then let's add a simple drag event as with the moon that enables the cones to be rotated while continuing to have their origin on the plane (which will continue to be a degenerate case).
  • Then let's add scroll or zoom events near the origin which enable us to move the cones back into the screen or forward out of the screen.
  • Allow full control by allowing a scroll or zoom event away from the origin to change the apex angle of the cones.
So I'm just going to wander off and create a project and some code.  I'll be back in a little while with a tag for you to check out to follow along; I'm going to quote some parts of the code that I find interesting, but if you want to see the whole thing you'll need to check out the repo.

Create a basic outline

As I suspected, that wasn't as easy as it looked like it might be.  You can check out the "current" state of affairs by checking out the tag PHILO_CONIC_BASIC_OUTLINE.

The first casualty was the thought that we could use two canvases.  I was thinking that we could just call the PhiloGL constructor twice (once for each canvas) and everything would be fine.  It turns out that when PhiloGL describes the "application" as an application, it means just that: it is the application covering the entire Javascript context.  I don't know (and didn't investigate) whether this limitation is due to PhiloGL, the underlying model or the underlying hardware.  Neither did I consider mitigation strategies - such as using an iframe.  Instead, I just chose a different strategy.  When the time comes we will try to place a semi-opaque dark square across the entirety of the z = 0 plane which I think should do enough of a trick.

Boilerplate

So the first thing that I did was to set up the outermost boilerplate: the HTML file and the same structure of a Javascript file with a nested PhiloGL application constructor call.  Within that, I have a fairly generic camera definition, a simple texture definition (which I will return to later), an onError method and an onLoad method which creates and renders a scene consisting of two cones.

Creating a Cone

Because I'm creating two cones, I decided to place the construction in a function:
function cone(yd, rot) {
  var ret = new PhiloGL.O3D.Cone({
    radius: 20,
    height: 50,
    nradial: 20,
    textures: ['img/balloons.png']
  });
  ret.position.y = yd;
  ret.rotation.set(0, 0, rot);
  ret.update();
  return ret;
}
I know what you're thinking: that looks complicated just to create a cone.  Bear with me.

The actual cone creation is quite simple.  It would seem that all the arguments are in fact optional, but you probably want to specify the radius and the height - the ratio of these two gives you the "slope" of the cone.
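
As a concrete check on that ratio, the apex half-angle follows directly from the radius and height; the numbers here are the ones used in this post:

```javascript
// The apex half-angle implied by a cone of radius 20 and height 50.
var radius = 20, height = 50;
var halfAngleRad = Math.atan(radius / height);   // ≈ 0.3805 radians
var halfAngleDeg = halfAngleRad * 180 / Math.PI; // ≈ 21.8 degrees
```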

Because all shapes in WebGL are in fact just compound triangles, I found that the default cone rendering (with nradial set to 10) looked a bit "sharp" rather than rounded and found (by experiment) that 20 was the sweet spot (obviously the more you have the slower the render).

I had a lot of issues with the texture and I still don't really understand them; I'll come back to that in a bit.

Now, as I've said, I want two cones meeting at the origin: one facing up and one facing down.  When I first rendered the first cone, I was surprised to see that it was centered in the screen rather than (for want of a better word) sitting on the x-axis.

It would seem that the designers decided that the right thing to do would be to have the center of the cone be halfway up the central axis.  I'm not quite sure what drove this choice - in fact, I couldn't even confirm that it was true - but it means that we have to move both cones and invert the top one.

These "instructions" are passed in to the method as arguments.  yd is the "y displacement", i.e. the amount we want to move the cone up or down the y-axis.  The rot is a rotation (in radians) we want to apply in the XY plane (i.e. about the z-axis).

Every O3D model (including Cone here) exposes both position and rotation as properties you can adjust.  However, adjusting them is not, of itself, enough: you then need to tell the object that you have adjusted the properties by calling the update method.

Since we are only interested (at the moment) in moving the cone up and down the y-axis, we only adjust the position.y value.

The specification of the rotation boggles my mind, but basically it consists of three values: the amount to rotate about each axis.  In this case, we want a fairly simple rotation: 180º (π radians) about the z-axis which is easy enough to specify, but I feel that there must be some more consistent way of specifying this which I haven't seen yet.  Hopefully we will get back to it later in the exercise.

Building the Scene

With the ability to create cones in place, we can now construct the scene:
var top = cone(25, Math.PI);
var bottom = cone(-25, 0);

app.scene.add(top);
app.scene.add(bottom);

draw();
This is very simple: we create the two cones, add them to the scene and then draw the scene.

Drawing the Scene

The guts of the draw() method are probably where I'm weakest in terms of understanding what I did.  I mainly just copied the code that we looked at in the example.
gl.viewport(0, 0, app.canvas.width, app.canvas.height);
gl.clearColor(0.7, 0.7, 0.7, 1);
gl.clearDepth(1);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

app.scene.render();
As with the example, I have created a variable gl which is just an alias for app.gl.

The first line defines where the output is going to go and I have stuck with the assumption that we want to use the entire canvas.  The second line specifies the clear (or background) color.  I have chosen a "light" gray, but to the human eye it is more of a neutral gray.  Skipping over the lines I really don't understand, the penultimate line actually clears the canvas and then the final line renders the entire scene to the canvas.

Texture

Finally, let's talk about the texture.  I'd said before that I usually recommend (conical) party hats as the way to go in understanding conic sections, so I thought I'd carry on that theme by putting some balloons on the cones.

I kept - and keep - on getting an error about "there not being any texture bound".  Googling suggests this has something to do with the texture not being loaded, which makes sense, except it happens so much that it doesn't seem entirely plausible.  Anyway, I have just stopped paying any attention to it.

The basic setup is to declare a texture in the textures argument of the constructor.  There appear to be a lot of options you can specify, but I didn't understand any of them and nothing I played with seemed to make my life any better or worse.  So I ended up with the simplest thing that could possibly work:
textures: {
  src: ['img/balloons.png']
},
The image I downloaded off the internet was a JPEG, but it would seem that JPEGs are not supported; I base this on the bizarre error message I received when I tried it.  I converted it to a PNG, but that didn't work either; this time I received an error message about powers of two.  I then shrank my image to exactly 512x512, saved that as a PNG, and that worked.  I don't know the details.
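
The powers-of-two complaint is a WebGL 1 restriction: texture dimensions need to be powers of two for mipmapping and repeat wrapping, which is why 512x512 worked where the original size didn't.  A quick way to check a dimension:

```javascript
// True when n is a positive power of two (512 qualifies, 500 does not):
// a power of two has exactly one bit set, so n & (n - 1) is zero.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}
```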

It would seem that the src parameter becomes the id of the texture.  Anyway, as noted above, when creating the cone, we simply specify the value of the texture's src string as the texture we want to use:
var ret = new PhiloGL.O3D.Cone({
  ...
  textures: ['img/balloons.png']
});

Summary

Having understood the basic mechanics of building WebGL applications, I made a plan to build a conic section visualizer and started the process by building and rendering a simple scene consisting of two party hats.

Getting Started with 3D Graphics in Javascript

I have a number of projects on the go that feel like they want 3D graphics.  This is not something I've ever done before, but I'm aware of it in my outer consciousness.  So I thought I'd give it a go.

For full disclosure, I've been aware for quite a while that it's possible to do cool 3D graphics on the Web.  For example, some of my ex-colleagues work at AGI, where they track Santa's progress every Christmas for NORAD.

At the moment, I'm thinking a lot about data visualization and the interconnectedness of all things.  I've thought about this in the past, but I've always drawn 2D models on desktops.  This time I want to have a go at 3D on the Web.

At the same time, I'm thinking about presentations and wondering if I can make a "cooler" presentation tool than, say, Keynote (cooler than Powerpoint or Google Slides would not be hard) and in this I'm influenced to a degree by Prezi, which is probably the most alternative thing that I've seen.

I may or may not eventually do either or both of those things, but in the meantime I want to develop the capability.  So here, on this blog, I am going to experiment with a library called PhiloGL which I'm hoping will be fit for my purposes and, specifically, I am going to start by blatantly stealing their "Example 11".

If I ever do get around to writing a cool presentation tool, I may blog about it here, but it will live somewhere else in its own open-source project.

Let's Get Started!

I love starting with other people's examples - particularly if they have already solved the problem I'm trying to tackle.  That means that I can make as many mistakes as I want and always go back to working code.

So I'm going to download the core package and unzip it into my blog repository.  However, I'm not going to check it in, so if you are trying to follow along with this post, you will probably want to do this (I'll copy the relevant things into my directories when building my own code in later lessons):

mkdir PhiloGLSrc
cd PhiloGLSrc
curl -O http://www.senchalabs.org/philogl/downloads/PhiloGL-1.5.2.zip
unzip PhiloGL-1.5.2.zip

It would be possible to just open the relevant HTML file directly, but browsers often restrict content loaded from file:// URLs, so it is generally easier to browse an "actual" website.  I tend to start up a local webserver just to serve the static content.

Depending on your environment, there are many different options.  For my money, python's SimpleHTTPServer is generally easiest:

python -m SimpleHTTPServer 8080

(That module is Python 2; the Python 3 equivalent is python -m http.server 8080.)

Now we can browse to http://localhost:8080/examples/lessons/11/index.html and we should have a scrollable, rotatable, zoomable map of the moon.  If you look at the bottom, you can also see options as to how the moon is lit (direction of light source, color, etc).

Great!  It works.  But how?

Unpacking Sample11

Conveniently, the JavaScript code for Sample11 is shown right next to the moon picture, so it's easy to follow what's going on.  Even so, I'm going to rabbit through my thoughts as we look at it.

But first, I'm going to pick apart the things that aren't shown: specifically the HTML.

This is the outline of the HTML (if you want to see the whole thing, View Source):
<html>
<head>
<link href="../lessons.css" type="text/css"
      rel="stylesheet" media="screen" />
<script type="text/javascript" src="../../../build/PhiloGL.js"></script>
<script type="text/javascript" src="index.js"></script>
</head>

<body onload="webGLStart();">
  <canvas id="lesson11-canvas" style="border: none;"
          width="500" height="500"></canvas>

  ...
  <input type="checkbox" id="lighting" checked />Use lighting
  ...
  X: <input type="text" id="lightDirectionX" value="-1.0" />
  Y: <input type="text" id="lightDirectionY" value="-1.0" />
  Z: <input type="text" id="lightDirectionZ" value="-1.0" />
  ...
  R: <input type="text" id="directionalR" value="0.8" />
  G: <input type="text" id="directionalG" value="0.8" />
  B: <input type="text" id="directionalB" value="0.8" />
  ...
  R: <input type="text" id="ambientR" value="0.2" />
  G: <input type="text" id="ambientG" value="0.2" />
  B: <input type="text" id="ambientB" value="0.2" />
  ...
  </body>
</html>
The link tag loads in the generic CSS for the examples.

The first script tag loads in the actual PhiloGL library from the "build" directory.

The second script tag loads in the example code as displayed in the text box in the browser window.

Note the onload attribute on the body: this is what causes the WebGL code to be initialized once all of the HTML has been loaded.  It is important to do things in this order because only then can you be guaranteed that all the elements will be loaded and initialized before the Javascript attempts to reference them.

The canvas tag identifies an area of the screen into which the image will be rendered.

The remaining items are the various inputs to control the lighting.

The JavaScript

Now let's unpack that JavaScript that's on the screen.  Starting at the very outside, it is just one function - webGLStart() - which is the one specified in the onload attribute of the body.  It does all of the initialization and setup, and then everything should run its course.

Inside this, there are three declarations and then the library initialization call:
var $id = function(d) { return document.getElementById(d); };
var moon = new PhiloGL.O3D.Sphere({ ... });
PhiloGL('lesson11-canvas', { ... });
The first of these is a very simple abbreviation for getElementById().  The second creates a model of the moon by creating a sphere and then coating it with a map of the moon.  The third one calls the central PhiloGL initializer, telling it to use the canvas we defined in the HTML and then providing it with a set of options that we will look at next.

Constructor Options

This is the next layer of the onion in the initializer call (see also the official documentation):
{
  camera: { ... },
  textures: { ... },
  events: { ... },
  onError: function() { ... },
  onLoad: function(app) { ... }
}
There appear to be three basic parts to the options:
  • Defining how the scene is interpreted and rendered
  • Describing how the application will interact with user events
  • Lifecycle events (onError and onLoad)
In a little bit we are going to look at how the scene is built up (spoiler: it's the moon) but 3D renderings require more than just the scene.

One important question is "where is the observer?"  In a 3D model, it is possible for the observer to stand anywhere in 3D space and to be looking in any direction.  It is also possible to adjust the "field of view" (commonly called the telephoto effect or "zooming in"), the aspect ratio (the ratio of width to height of the projecting screen, i.e. canvas) and the range that can be seen (objects too close will be ignored and objects too far away are invisible).  This is determined by the camera.
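
For reference, a camera definition in the constructor options might look something like the fragment below.  The near, far and position values are the ones used elsewhere in this series; the exact option names (particularly fov) are my assumption rather than something I have confirmed against the documentation, so treat this as a sketch:

```javascript
// A hypothetical camera block for the PhiloGL constructor options;
// the option names are assumptions, not confirmed against the docs.
camera: {
  fov: 45,                         // vertical field of view, in degrees
  near: 10,                        // ignore anything closer than this
  far: 200,                        // ignore anything further than this
  position: { x: 0, y: 0, z: 99 }  // where the observer stands
},
```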

Another key question is "where is the light coming from?" In order for us to see anything, light must be coming from somewhere, be a particular color, strength and be travelling in a particular direction.  All of this must be described - at least to some extent.

The textures provide a means for having a layer of abstraction between the objects in the scene and specific images.

Because this canvas is its own self-contained entity, user events are described here in the context of the initializer.  However, as far as I can see, they are perfectly ordinary events and perfectly ordinary event handlers.

The onError and onLoad methods form a pair that handle the outcome of the initialization.  If the initialization is successful, then onLoad is called and given a handle to the application that was created during initialization.  If the initialization failed for any reason, onError is called instead.

The onLoad method

The first dozen or so lines of the onLoad method simply gather resources together.  The first five are simply aliases for fields within the provided application object.  The remaining three definitions gather up fields from the HTML to determine the nature of the scene's lighting.

The next block - as the comment indicates - does basic setup of the GL context.  I could guess at what it does but I'm sure you can too.

Finally we create the scene by adding the moon we created at the top level to the scene which was passed in as part of the initialized application object.  We then call the draw() function below.

The draw method

Finally we get to the actual drawing.  This looks complex but it basically comes down to:
  • Clear the canvas
  • Set up the lighting based on what settings are current in the HTML input elements
  • Render the scene
  • Ask for draw() to be called every time the animation clock ticks
And that's pretty much it.  Next we'll have to think of something more interesting to do.

Summary

This WebGL stuff looks pretty easy - as long as somebody else is providing the artwork, setting up the scene and lighting, and writing the code.  It also looks pretty cool.