Introduction
WebGL
High performance 3D graphics in the browser are enabled by WebGL
https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API
WebGL is the browser's interface to OpenGL, giving the browser access to the device's graphics processing unit (GPU) for accelerated rendering. The rendering is placed in the web page's HTML <canvas /> element using the "webgl" context. The Mozilla Developer Network (MDN) has an excellent tutorial.
https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Getting_started_with_WebGL
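The essential first step, which the tutorial walks through, is obtaining the "webgl" context from a <canvas> element. A minimal illustration (the element id is just an example):

// Obtain the WebGL rendering context from a canvas element
const canvas = document.querySelector('#glcanvas');
const gl = canvas.getContext('webgl');
if (gl === null) {
  // WebGL is not supported or is disabled in this browser
  alert('Unable to initialize WebGL.');
}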
If you read the tutorial, you quickly learn that the webgl context allows writing "shaders" in the OpenGL ES Shading Language (GLSL). GLSL is a very low-level programming language for the GPU, and it is not a language that most web developers should write directly. Fortunately, there are several good frameworks for writing 3D applications, such as Three.js and Babylon.js.
Three.js
We will use Three.js for our development.
There are several very good Three.js tutorials:
https://threejsfundamentals.org/ <- read
https://discoverthreejs.com/ <- do
This lecture will not repeat the content of these tutorials. You should read the "ThreeJS Fundamentals" tutorial and work through the "Discover ThreeJS" tutorial. This lecture assumes that you have worked through the "Discover ThreeJS" tutorial, and builds on the tutorial's code. I will only review the basic components that every ThreeJS application (in fact, any 3D application) shares.
Fundamental Components
Every ThreeJS application will use a “WebGLRenderer” to render the “Scene”.
- https://threejs.org/docs/index.html#api/en/renderers/WebGLRenderer
- https://threejs.org/docs/index.html#api/en/scenes/Scene
The Scene is where a hierarchy of 3D objects ("Object3D") is placed, composing the view that is displayed.
https://threejs.org/docs/index.html#api/en/core/Object3D
Object3D is the base class for “Meshes” (and other components).
https://threejs.org/docs/index.html#api/en/objects/Mesh
A mesh is composed of a “geometry”, which describes the shape of the object, and a “material”, which describes the surface (e.g. the color and how light reflects from the surface). The typical base class for the geometry is “BufferGeometry”.
https://threejs.org/docs/index.html#api/en/core/BufferGeometry
There are some primitive geometries for constructing 3D objects, for example
- BoxGeometry: https://threejs.org/docs/index.html#api/en/geometries/BoxGeometry
- CircleGeometry: https://threejs.org/docs/index.html#api/en/geometries/CircleGeometry
- ConeGeometry: https://threejs.org/docs/index.html#api/en/geometries/ConeGeometry
- PlaneGeometry: https://threejs.org/docs/index.html#api/en/geometries/PlaneGeometry
- SphereGeometry: https://threejs.org/docs/index.html#api/en/geometries/SphereGeometry
“Material” is the abstract class for all materials.
https://threejs.org/docs/index.html#api/en/materials/Material
There are many material classes that you can use, for example:
- LineBasicMaterial: https://threejs.org/docs/index.html#api/en/materials/LineBasicMaterial
- MeshBasicMaterial: https://threejs.org/docs/index.html#api/en/materials/MeshBasicMaterial
- MeshPhongMaterial: https://threejs.org/docs/index.html#api/en/materials/MeshPhongMaterial
- MeshPhysicalMaterial: https://threejs.org/docs/index.html#api/en/materials/MeshPhysicalMaterial
A scene is viewed from a “Camera” and illuminated with a “Light”.
- https://threejs.org/docs/index.html#api/en/cameras/Camera
- https://threejs.org/docs/index.html#api/en/lights/Light
There are several different kinds of cameras, but you will probably use the “PerspectiveCamera”.
https://threejs.org/docs/index.html#api/en/cameras/PerspectiveCamera
Lights are typically used in combination, for example:
- DirectionalLight: https://threejs.org/docs/index.html#api/en/lights/DirectionalLight
- AmbientLight: https://threejs.org/docs/index.html#api/en/lights/AmbientLight
- HemisphereLight: https://threejs.org/docs/index.html#api/en/lights/HemisphereLight
In summary, most 3D applications will have:
- Renderer
- Scene, composed of Object3D:
- Mesh, composed of geometries and materials
- Camera
- Light
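A minimal sketch tying these components together (the container id and sizes are illustrative; a complete application, like the one in the tutorials, would also handle resizing and an animation loop):

import { WebGLRenderer, Scene, PerspectiveCamera, DirectionalLight,
         Mesh, BoxGeometry, MeshPhongMaterial } from 'three';

const container = document.querySelector('#scene-container');

// Renderer: draws the scene into a <canvas> element
const renderer = new WebGLRenderer();
renderer.setSize( container.clientWidth, container.clientHeight );
container.append( renderer.domElement );

// Scene: the hierarchy of Object3Ds
const scene = new Scene();

// Camera: views the scene (looks down negative Z by default)
const camera = new PerspectiveCamera( 35, container.clientWidth / container.clientHeight, 0.1, 100 );
camera.position.set( 0, 0, 10 );

// Light: illuminates the scene
const light = new DirectionalLight( 0xffffff, 1 );
light.position.set( 10, 10, 10 );
scene.add( light );

// Mesh: geometry plus material
const cube = new Mesh( new BoxGeometry( 2, 2, 2 ), new MeshPhongMaterial( { color: 0x0077cc } ) );
scene.add( cube );

renderer.render( scene, camera );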
Details
Object3D Composition
The scene is composed by adding objects.
https://threejs.org/docs/index.html#api/en/core/Object3D.add
Note that Scene and Mesh are both Object3Ds. So complex 3D objects are created by hierarchically adding simpler Object3Ds, or Groups.
https://threejs.org/docs/index.html#api/en/objects/Group
For example, a snowman would be a Mesh (or a Group) composed of three Meshes using SphereGeometry with different radii, stacked on top of each other.
Each geometry uses a "local" coordinate system to describe itself, and after it is added to the Group, it is located in the Group's coordinate system.
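A sketch of the snowman composition, with illustrative radii and positions in the Group's local coordinate system:

import { Group, Mesh, SphereGeometry, MeshPhongMaterial } from 'three';

const snowMaterial = new MeshPhongMaterial( { color: 0xffffff } );

const bottom = new Mesh( new SphereGeometry( 1.0 ), snowMaterial );
const middle = new Mesh( new SphereGeometry( 0.7 ), snowMaterial );
const head   = new Mesh( new SphereGeometry( 0.5 ), snowMaterial );

// position each sphere in the Group's coordinate system (Y is up)
bottom.position.set( 0, 1.0, 0 );
middle.position.set( 0, 2.4, 0 );
head.position.set( 0, 3.4, 0 );

const snowman = new Group();
snowman.add( bottom, middle, head );  // Object3D.add accepts multiple objects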
Coordinate System and Transformations
Understanding the coordinate system is critical for correct object composition, and camera and light positioning. The standard coordinate system is slightly different from the screen coordinate system.
- Y axis is vertical and positive is up (note that the screen Y axis is positive down)
- X axis is horizontal and positive is to the right (the same as the screen X axis)
- Z axis is horizontal and the camera’s default orientation is facing negative Z.
By convention, units are in meters, but this is only a convention.
The world coordinate system is the scene coordinate system. Object3D has methods for retrieving the position of a point (Vector3) in either the local or world coordinates.
Objects are translated, rotated and sized (even stretched) using affine transformations, represented by 4×4 matrices.
https://en.wikipedia.org/wiki/Affine_transformation
But Object3D has simpler methods for translating, rotating and sizing along a single axis. If you have studied physics, you have learned that combining rotations is not commutative, i.e. the order of rotations matters. But any rotation is described precisely by quaternions.
https://en.wikipedia.org/wiki/Quaternion
Object3D has methods for rotating about individual axes. So hopefully you will not have to use advanced mathematical concepts.
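For illustration, assuming mesh is an Object3D (such as a Mesh) that has already been added to a parent:

mesh.position.set( 1, 0.5, -2 );   // place in the parent's coordinate system (meters by convention)
mesh.translateY( 0.5 );            // move 0.5 along the object's local Y axis
mesh.rotateY( Math.PI / 4 );       // rotate 45 degrees about the local Y axis
mesh.scale.set( 1, 2, 1 );         // stretch along the Y axis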
Project Design
Our example project is to display "paths" of drone flights through a "space" or room, so that the drone flights can be analyzed. The scientist provides a model of the space made from a LIDAR point cloud, and the flight paths come from the text output generated by a Unity simulation.
Making and Loading 3D Objects
So our example application should load the model (in glb format) from the file system into the scene. A typical interaction design is that the user clicks a button in the browser, the browser opens a file explorer window, and the user selects the glb file. The application can use the GLTFLoader.
https://threejs.org/docs/index.html#examples/en/loaders/GLTFLoader
Likewise, paths are added to the scene by the user clicking and then selecting the text file describing the flight path. But now the application needs to parse the file, create a JavaScript (JS) object representing the path, and use the JS object to make the BufferGeometry, LineBasicMaterial and Line mesh:
- https://threejs.org/docs/index.html#manual/en/introduction/Drawing-lines
- https://threejs.org/docs/index.html#api/en/materials/LineBasicMaterial
- https://threejs.org/docs/index.html#api/en/objects/Line
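A minimal sketch of that sequence, assuming a scene from earlier and a few hard-coded points in place of a parsed flight path:

import { BufferGeometry, LineBasicMaterial, Line, Vector3 } from 'three';

const points = [
  new Vector3( 0, 0, 0 ),
  new Vector3( 1, 1, 0 ),
  new Vector3( 2, 1, -1 ),
];

const geometry = new BufferGeometry().setFromPoints( points );
const material = new LineBasicMaterial( { color: 0x00ff00 } );
const line = new Line( geometry, material );
scene.add( line );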
Locating the Path
After the path and space are loaded, the user will need to inspect the scene by moving the camera with OrbitControls.
https://threejs.org/docs/index.html#examples/en/controls/OrbitControls
See the example of OrbitControls:
https://threejs.org/examples/#misc_controls_orbit
Most likely the user will need to locate the path precisely in the space. The application can use DragControls.
https://threejs.org/docs/index.html#examples/en/controls/DragControls
See the example use of DragControls:
https://threejs.org/examples/#misc_controls_drag
Although the user interactions of OrbitControls and DragControls are individually natural, getting them to work together is not straightforward.
Analysis
There are several analyses that can be made after the space and path are loaded into the scene. For example:
- The path could be colored to indicate the speed along the path
- The path can show the orientation of the drone along the path
There are also other tools that could be added to the application to make visualizing the path easier.
- There can be multiple fixed camera views
- There can be views that bisect the space and path
This example project only demonstrates loading and locating the path. It will not code the above analyses. They are left for you.
Project Code
Installing and Running
The project code is at:
https://github.com/2021-SD-UI/drone_flight
Clone the repository into a directory on your machine. To run the code on your development machine, you will need Node and the Node Package Manager (npm). To install Node and npm, use a version manager.
https://docs.npmjs.com/cli/v7/configuring-npm/install
I use nvm-windows on my home machine.
To learn more about Node and npm see:
- https://nodejs.org/en/about/
- https://nodejs.org/en/docs/guides/
- https://docs.npmjs.com/cli/v7/using-npm
- https://docs.npmjs.com/cli/v7/commands
After installing npm, you install the packages by entering
npm install
This will make a node_modules directory and load the webpack and three.js packages.
To run the project locally, enter:
npm start
This will package a JavaScript bundle, open a browser window and load the page.
Project Directory Structure
The project directory structure is very similar to the structure in the “Discover Three JS” tutorials, but modified for development of larger projects. At the top level there are development files, and public and src directories.
project/
  .git/
  node_modules/
  public/
  src/
  package-lock.json
  package.json
  webpack.configure.json
Development Files
The files package.json and webpack.configure.json are development files. They specify the development environment.
package.json
NPM uses package.json to know what modules/libraries to install. If you open package.json in your editor, you can identify the "devDependencies" and "dependencies" entries. The "dependencies" entry lists the modules used by the application. In this case there is only "three", for the Three.js library. The "devDependencies" entry lists modules used during development. They are not part of the application or the JS code that is loaded into the page. The development dependencies are:
- webpack
- webpack-cli
- webpack-dev-server
These are tools provided by webpack.
Webpack
Webpack’s primary function is to bundle JS code for the web application.
Webpack uses the “import” statement to build the dependency graph and construct the JS bundles.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import
You will see many examples of the import statements in the code.
The file webpack.configure.json configures the entry point, output and rules for the modules. To learn more about webpack configuration, read
https://webpack.js.org/concepts/
The first section, “concepts”, should be sufficient to give an overview of webpack and configuring webpack.
For development, we will use the “webpack development server” and “hot modules replacement”.
- https://webpack.js.org/guides/development/#using-webpack-dev-server
- https://webpack.js.org/concepts/hot-module-replacement/
- https://webpack.js.org/guides/hot-module-replacement/
The "webpack development server" runs a small web server on your local machine. This is necessary for loading external files, such as the model and text files, into the browser. "Hot module replacement" is also useful during development: whenever you save a project file, it repackages the JS bundle and reloads the webpage. This makes development and incremental programming easy.
Web App Files Structure
Top Level Structure
The web application files are split between the two directories:
project directory/
  public/
    assets/
    styles/
    index.html
  src/
    util/
    World/
    index.js
The public/ directory contains all the files that the client can see. The src/ directory contains all the files that will be packaged by webpack.
public/index.html
The HTML file index.html is the webpage. Open the file in your editor. The head:
<head>
  <title>Drone Flight </title>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <meta charset="UTF-8" />
  <link href="./styles/index.css" rel="stylesheet" type="text/css">
  <script type="module" src="./bundle.js"></script>
</head>
The tags to pay attention to are the <link /> element, which links the CSS file, and the script element
<script type="module" src="./bundle.js"></script>
which is the output location for the webpack bundle.
The body:
<body>
  <div class="top">
    <h1>Drone Flight</h1>
    <div class="inline">
      <label for="space_upload">Choose space (glb, gltf)</label>
      <input type="file" id="space_upload" name="model_uploads" accept=".glb, .gltf" >
    </div>
    <div class="inline">
      <label for="path_uploads">Choose path (txt)</label>
      <input type="file" id="path_uploads" name="text_uploads" accept=".txt" >
    </div>
  </div>
  <div id="scene-container">
    <!-- Our <canvas> will be inserted here -->
  </div>
</body>
The body is split into two main divs:
<div class="top">
<div id="scene-container">
The "top" div places the title above the scene. Open styles/index.css to see how this is done. The "top" rule:
.top {
  position: absolute; /* Place at the top by removing from the normal flow */
  width: 100%;
  /* make sure that the heading is drawn on top */
  z-index: 1;
}
positions the div at the top and gives it a z-index so that it appears above the scene. Read about CSS layout in the MDN documentation.
https://developer.mozilla.org/en-US/docs/Learn/CSS/CSS_layout
Reading the introduction about "Normal Flow" and the "position" property should be sufficient:
- https://developer.mozilla.org/en-US/docs/Learn/CSS/CSS_layout/Introduction
- https://developer.mozilla.org/en-US/docs/Web/CSS/position
The "inline" class for the <label> and <input /> tags positions them below the <h1> element and adjacent to each other.
The "scene-container" div is where Three.js will place the scene.
src/index.js
The index.js file is the entry point for webpack. It should look very familiar if you have worked through the "Discover Three JS" tutorial, where it was called main.js. Briefly, it locates the HTML elements for the scene container and the file inputs. Then it constructs the world, initializes all the asynchronous calls, and starts the world.
src/ Structure
The src/ directory is split into directories, util/ and World/.
src/
  util/
    makeURL/makeURL.js
    uploadTextFile/makeTextFileUploader.js
  World/
    components/
      path/
      space/
      camera.js
      lights.js
      scene.js
      utilities.js
    systems/
      dragControls/
      orbitControls/
      Loop.js
      renderer.js
      Resizer.js
    constants.js
    World.js
  index.js
The util/ directory contains generic JS code for creating a URL object from a file, and for uploading and reading a text file. We will discuss these files in detail shortly. The World/ directory structure should look very similar to the "Discover Three JS" tutorial. The components/ directory contains all the Object3Ds that are added to the scene, and the systems/ directory contains all the components that control and render the scene.
Web App Functional Coding
Load Space
Interaction and Implementation Design
The interaction for uploading the space is: the user clicks a button, and the browser opens a file explorer window. From the file explorer, the user selects a gltf file and clicks "open". The app then loads the space into the scene. The last chapter of the "Discover Three JS" tutorial explains how to use GLTFLoader.loadAsync(…) to load the model into the scene.
https://discoverthreejs.com/book/first-steps/load-models/
Loader is the base class for GLTFLoader and defines loadAsync(…).
https://threejs.org/docs/#api/en/loaders/Loader.loadAsync
The loadAsync method only requires a URL, a string. Searching the MDN documentation, we discover that URL.createObjectURL is used to create a URL from a File.
https://developer.mozilla.org/en-US/docs/Web/API/URL/createObjectURL
The MDN documentation for the <input> HTML element provides example code using the element to select a File and make a URL to link the image into the page.
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/file#examples
The essence of the example is to add an event listener to the <input> HTML element listening to the ‘change’ event.
input.addEventListener('change', updateImageDisplay);
The first argument, 'change', is the event to listen for, and the second argument, updateImageDisplay, is the callback. The callback is a function that is invoked after the 'change' event fires. Adding event listeners with callbacks is the standard technique for handling user interactions in JavaScript.
The callback, updateImageDisplay, makes safety checks and DOM manipulations. The DOM (Document Object Model) is the representation of the HTML page in the JavaScript code.
https://developer.mozilla.org/en-US/docs/Web/API/Document_object_model
Finally updateImageDisplay adds the image to the DOM using
image.src = URL.createObjectURL(file);
We can borrow much of the example code to make our “space loader”.
Inspect Project Code
The natural location to add event listeners is in the World constructor.
import { loadInitSpace, addSpaceListener } from './components/space/space.js';

// …

constructor(container, inputSpace, inputPath) {
  // …

  // Add event listeners
  addSpaceListener( inputSpace, this );
}
So we can find addSpaceListener in the components/space/space.js file.
function addSpaceListener ( input, world ) {
  const makeSpace = makeSpaceCallback( world );
  const uploadSpace = makeURL( input, makeSpace );
  input.addEventListener( 'change', uploadSpace );
}
So the addSpaceListener function constructs the callback, uploadSpace, using makeSpace, and adds the ‘change’ event listener.
Let us inspect makeURL in src/util/makeURL/makeURL.js. I'll not reproduce the project code here; it is rather simple generic code for making a URL from a File upload. After a simple safety check, the code uses URL.createObjectURL to make the URL and then calls the callback with the URL.
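A hedged sketch of what such a helper might look like, based on that description (the actual project code may differ in detail):

// Generic helper: given a file <input> element and a callback,
// return a 'change' handler that makes a URL from the selected File
function makeURL( input, callback ) {
  return () => {
    const file = input.files[0];
    if ( !file ) return;                      // simple safety check
    const url = URL.createObjectURL( file );  // make a URL from the File
    callback( url );                          // the callback only receives the url
  };
}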
There is a new aspect to notice in this code. This generic code only knows about the URL, and so it can only call the callback with the URL. This is tricky for us because ideally we would want the callback to know about the world, so it can add the space to the world and the scene. How can we fix this? We could add a world argument to the makeURL function and pass it into the callback, but then makeURL would no longer be generic and reusable. JavaScript has another way.
Back in World/components/space/space.js, look at the makeSpaceCallback function:
function makeSpaceCallback ( world ){
  // world is now in the closure of makeSpaceCallback
  return async ( url ) => {
    // if a space exists, remove it from the scene
    if ( world.space ) world.scene.remove( world.space );
    const space = await loadSpace( url, world );
  } // end returned function
} // end makeSpaceCallback
This code is strange. The makeSpaceCallback is called with the world argument, but the world is used in the returned function without passing it as an argument. Below I simplify the code to make my point clear.
function makeSpaceCallback( world ){
  return () => {
    // uses world
  }
}
This code is using the JavaScript concept of "closure".
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures
The closure is the context that a function runs in, i.e. the variables that it knows about. Each function creates a closure. The makeSpaceCallback creates a closure containing the world variable for the returned function. This is a standard JavaScript technique for extending the reach of a function. It is very much like a class extending another class.
Now that we understand the basic structure of makeSpaceCallback, we can inspect the loadSpace function which does most of the work.
async function loadSpace( url, world ){
  const loader = new GLTFLoader();
  const [ spaceData ] = await Promise.all([
    loader.loadAsync( url ),
  ]);
  const space = setupSpaceModel( spaceData );
  space.applyMatrix( new Matrix4().makeScale( 1, 1, -1 )); // Flip Z-axis
  world.space = space;
  world.scene.add( space );
  return space;
}
Most of the code should be familiar from the “Discover Three JS” tutorial. It is an async function and uses GLTFLoader.loadAsync to load the model and then add the mesh to the world’s scene. There is only one strange line.
space.applyMatrix( new Matrix4().makeScale( 1, 1, -1 )); // Flip Z-axis
https://threejs.org/docs/#api/en/core/Object3D.applyMatrix4
The applyMatrix method uses a 4×4 matrix to apply an affine transformation. In this case the affine transformation flips the Z-axis. During development, Ricardo (the scientist for the Drone Flight project) and I noticed that to get the LIDAR model to agree with the Unity model, we had to flip the Z axis of the LIDAR model.
Review
Let us review the process for a user to upload the space model beginning with loading the page, and we can also pick up some loose ends.
The browser makes the DOM from index.html, which positions the <label> and <input> HTML elements in the top div. The styling in /styles/index.css makes the <label> look like a button and colors it for hover and active states. The <input> is hidden by making it 1px by 1px. I tried hiding the <input> by changing its opacity as the MDN example does, but then the tag still consumes space on the page and another button cannot be adjacent to the first button.
The user clicks the button and selects a file. This fires the "change" event, and makeURL creates the URL using URL.createObjectURL and calls the callback made by makeSpaceCallback. The makeSpaceCallback creates the closure so the returned function has access to the world. It checks whether a space already exists; if so, it removes it from the scene. We only want one space at a time in the scene. It then calls loadSpace, which waits on GLTFLoader.loadAsync to load the URL/model, calls setupSpaceModel to extract the mesh, then flips the Z axis and adds the space to the world and the scene.
Load Paths
Interaction and Implementation Design
The user interaction for uploading a path is very similar to uploading the space, but there are some subtle differences in the implementation. After the user selects the file, the app needs to read the file, parse it, and make a JavaScript object and Mesh.
Uploaded files are read in JavaScript using FileReader and its readAsText() method.
- https://developer.mozilla.org/en-US/docs/Web/API/FileReader
- https://developer.mozilla.org/en-US/docs/Web/API/FileReader/readAsText
The FileReader.readAsText() method is asynchronous, so the JS needs to set the callback for what it should do after reading. FileReader fires the 'load' event after loading the file, and the event handler is set in FileReader.onload.
https://developer.mozilla.org/en-US/docs/Web/API/FileReader/onload
Inspect Project Code
The event listener for the flight path file input is set in the World constructor
addPathListener( inputPath, this.scene, dragControls, pathJSObjs );
which is imported from
import { loadInitPath, addPathListener } from './components/path/path';
Open components/path/path.js in your editor, and study the addPathListener function:
function addPathListener ( input, scene, dragControls, pathJSObjs ){
  const makePath = loadPath( scene, dragControls, pathJSObjs );  // using partial application
  const uploadPath = makeTextFileUploader( input, makePath );    // Make callback for EventListener
  input.addEventListener( 'change', uploadPath );
}
This code is very similar to addSpaceListener in components/space/space.js. A callback function, uploadPath, is made and then set to respond to the 'change' event. The callback is made by the function makeTextFileUploader, which is imported from src/util/uploadTextFile/makeTextFileUploader.js. After some file verification, makeTextFileUploader sets the 'load' event handler and then calls readAsText.
// The work on the file is done by the callback for onload
// need to set reader onload callback before reading
reader.onload = function (event) {
  const result = event.target.result;
  callback(result);
} // end onload

// read file after setting up callback
reader.readAsText(file);
Note that best practice is to set the 'load' event handler before calling readAsText. If the event handler is set after readAsText is called, there is the possibility that the 'load' event will fire before the event handler is set.
In this case, the 'load' event handler gets the result from event.target.result and calls the callback.
Like makeURL, this code is very generic and can be reused. It does not know anything about the world.
In World/components/path/path.js, inspect the loadPath function which does the work of parsing the file, making the JS object and Mesh, and finally adding it to the scene.
// A curry/partial application function see
// https://medium.com/javascript-scene/curry-and-function-composition-2c208d774983
const loadPath = ( scene, dragControls, pathJSObjs ) => pathString => {

  // make path JS object and add to pathJSObjects
  const pathObject = makePathObject( pathString );
  pathJSObjs.push( pathObject );

  // make path mesh
  // Using composition
  const pathMesh = makePathMesh( makePathGeometry( pathObject ) );
  // Another way to flip the path geometry
  // pathMesh.applyMatrix( new Matrix4().makeScale( -1, 1, 1 )); // Flip X-axis
  // I prefer flipping the JS object rather than the Mesh

  // Add path to the scene
  scene.add( pathMesh );

  // Add path to draggables
  const draggables = dragControls.getObjects();
  draggables.push( pathMesh );
}
The declaration for loadPath is strange:
const loadPath = ( scene, dragControls, pathJSObjs ) => pathString => {
  // ...
}
It uses the JavaScript ES6 fat arrow syntax for declaring a function.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions
The fat arrow syntax typically has the structure
const ftn = ( a, b ) => {
  // … do something
  return something
}
which could be written in the older JS syntax as
const ftn = function( a, b ) {
  // … do something
  return something
}
If there is only one argument and the function body is a single line returning the result of that line, the syntax can be shortened:
const shortFtn = a => // do something in a single line and return result
which could be written in the older JS syntax as
const shortFtn = function( a ) {
  return // do something in a single line
}
Although the two syntaxes produce similar results (functions), they are not identical. Read about the differences and limitations of "arrow function expressions" in the documentation. Nevertheless, for the time being, we can imagine them as the same.
If we inspect the declaration of loadPath, it is a sequence of fat arrows and could be written as
const loadPath = function ( scene, dragControls, pathJSObjs ) {
  return function( pathString ) {
    // do something
  }
}
So loadPath is a function returning a function. This is very similar to the combined effort of the makeSpaceCallback and loadSpace functions in World/components/space/space.js. It is used for the same purpose: to create a closure for the returned function. In the case of loadPath, scene, dragControls and pathJSObjs are in the closure of the returned function, the callback for makeTextFileUploader.js.
Note that you invoke the outer function with
const returnedFunction = loadPath( scene, dragControls, pathJSObjs )
which will return a function taking pathString. You can also invoke both functions in a single expression.
const result = loadPath( scene, dragControls, pathJSObjs )( pathString )
I admit that the combined fat arrow syntax can be obscure. When I first learned of the syntax from "JavaScript Allonge" and "Eloquent JavaScript", I found it difficult to parse. But parsing it eventually comes naturally. The advantage of using the combined fat arrow is that it makes explicit that the function being defined is a curried function or partial application.
https://medium.com/javascript-scene/curry-and-function-composition-2c208d774983
Currying is a standard functional technique for building functions of several arguments from a sequence of single-argument functions. You may think, "Why bother with currying?" But a reason for using currying is to create a closure for the returned function, which permits writing generic functions such as makeTextFileUploader and makeURL.
Now that we understand the general structure of loadPath, let us inspect what it does. First it makes the JS object by calling makePathObject, which parses pathString. Open makePathObject.js. The code is too long for me to reproduce in this lecture; its length is because it is very repetitive. Basically it uses regular expressions to parse the string. I hope that you're vaguely familiar with regular expressions. If not, my favorite references are
- https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp
- https://www.regular-expressions.info/reference.html
I'll just point out some of the details. The RegExp constructor permits you to combine regular expression patterns, which is convenient for repeated patterns and for making the complete pattern more readable by breaking it up.
The pattern dRe is repeated.
const dRe = /(-?\d+\.\d+)/;
The pair of parentheses, "(…)", captures the pattern. These captures are the values that the code uses to make the JS object for the path. The pattern has an optional minus sign using the optional match, "-?". Then it matches one or more digits, followed by a period, "\.", and one or more digits. Note that "\d" is the notation for the digit class, "[0-9]".
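A hedged sketch of composing a larger pattern from the repeated dRe piece using the RegExp constructor (the line format shown is only illustrative; the actual parsingRE in makePathObject.js is longer):

const dRe = /(-?\d+\.\d+)/;
const d = dRe.source;   // the pattern as a string, reusable in larger patterns

// e.g. a pattern that would match a hypothetical "(1.00, 2.50, -3.25)" triple
const positionRE = new RegExp( `\\(${d}, ${d}, ${d}\\)` );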
Jump to the main code at the bottom of the file. The line
const returnObj = filteredLines.map( parseAndMakeObj );
The array filteredLines contains the lines from pathString that have matches. The method Array.map invokes the utility function, parseAndMakeObj, on each line and constructs a new array.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map
The utility function, parseAndMakeObj, finds all the matches in the line:
const match = line.match(parsingRE);
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/match
The string method match returns an array of matches. The first array element is always the full matched string. That is why the code extracts the captured matches beginning at index 1.
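An illustration of that indexing, again with a hypothetical line format:

const line = '(1.50, 2.00, -3.25)';
const re = /\((-?\d+\.\d+), (-?\d+\.\d+), (-?\d+\.\d+)\)/;
const match = line.match( re );
// match[0] is the full matched string; the captured groups start at index 1
const [ x, y, z ] = match.slice( 1 ).map( Number );   // [ 1.5, 2, -3.25 ]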
The rest of the code should be obvious. One detail to note is that the x components of position, speed and pitch are flipped. Ricardo and I noticed that in order for the path to match the space, we had to flip the x coordinate. The code could flip the position components using Object3D.applyMatrix4, but that would not flip the speed and orientation coordinates, so I prefer to flip them all while making the JS object.
The returned object has the structure
{
  position: { x, y, z },
  speed: { x, y, z },
  orientation: { pitch, yaw, roll },
  time
}
Mapping filteredLines with parseAndMakeObj produces an array of these objects.
Back to the loadPath function in path.js. After parsing, the array of objects is added to pathJSObjs.
pathJSObjs.push( pathObject )
The array pathJSObjs is a World property. It is saved in the World class so that it can be used later for analysis.
The path Mesh is made using function composition
const pathMesh = makePathMesh( makePathGeometry( pathObject ) );
The function makePathGeometry, in path/makePathGeometry.js, creates an array of Vector3 points using Array.map again and then uses BufferGeometry.setFromPoints to make the geometry. See for reference:
https://threejs.org/docs/index.html#manual/en/introduction/Drawing-lines
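A hedged sketch of makePathGeometry, assuming pathObject is the array of { position: { x, y, z }, … } objects produced by makePathObject (the actual project code may differ):

import { BufferGeometry, Vector3 } from 'three';

function makePathGeometry( pathObject ) {
  // map each parsed point to a Vector3 in scene coordinates
  const points = pathObject.map(
    ( { position } ) => new Vector3( position.x, position.y, position.z )
  );
  return new BufferGeometry().setFromPoints( points );
}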
The function defined in makePathMesh.js is simple, but there are some subtleties that should be explained. First, the color for the line is extracted from an array of colors defined in World/constants.js.
const color = PATH_COLORS[0];
The application should be able to upload more than one path for analysis, so each path should get a different color. The list of colors to use is kept in the PATH_COLORS array. Additional code is required to get this feature fully implemented. It should use modulo division on pathJSObjs.length to extract the color.
Second, the Mesh Object3D, line, is given a name.
line.name = 0; // name can be used to identify the line.
https://threejs.org/docs/index.html#api/en/core/Object3D.name
The name should be the index of the path in pathJSObjs. This is so the application can identify the path that is selected by the user and access the path JS object for analysis. Additional code is required for this feature to be functional. When a new path is uploaded, the name should be pathJSObjs.length - 1.
You should implement these features. It is an opportunity for you to learn the code. The function makePathMesh will need pathJSObjs, but makePathGeometry does not, so try to modify the code without editing makePathGeometry.
There is one more feature to inspect about uploading the path. During development, it is sometimes convenient to have a path loaded during page load. The "Discover Three JS" tutorial demonstrated how to do this in World.init(…) to load a gltf file. We should implement this feature for a flight path. To read a text file in a web app, the app needs to fetch the file.
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
Fetching uses the web server to serve the file to the client (the browser). The browser makes a request, and the server returns the text string in the response. Both receiving the response and extracting the response text are done asynchronously in JS. The fetch returns promises, so the code can use the async and await keywords.
Inspect loadInitPath in path/path.js. The response returned by fetch:
const response = await fetch( pathURL );
Note the await before the fetch. The Response object has many properties and methods.
https://developer.mozilla.org/en-US/docs/Web/API/Response
Note the check for the non-successful fetch.
if (!response.ok) {
  throw new Error(`HTTP error! status: ${response.status}`);
}
If the response is OK then the code can extract the text.
const pathString = await response.text();
Again note the await before response.text(). The method text() is asynchronous.
Finally the code makes the path Mesh by calling the complete loadPath.
const pathMesh = loadPath( scene, dragControls, pathJSObjs )( pathString );
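Putting the pieces together, loadInitPath has roughly this shape (the URL below is illustrative; the actual project code may differ in detail):

async function loadInitPath( scene, dragControls, pathJSObjs ) {
  const pathURL = '/assets/path.txt';        // hypothetical file served by the dev server
  const response = await fetch( pathURL );
  if ( !response.ok ) {
    throw new Error( `HTTP error! status: ${response.status}` );
  }
  const pathString = await response.text();  // asynchronously extract the text
  // invoke the curried loadPath with both argument lists
  return loadPath( scene, dragControls, pathJSObjs )( pathString );
}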
Review
Let us review the process for a user uploading a path, beginning from when the page loads. The DOM is constructed from the HTML, and the <label> and <input> elements are styled by the rules specified in index.css.
The user clicks the <label> HTML element and selects a file. The 'change' event fires and the callback made by makeTextFileUploader.js runs. The code of makeTextFileUploader makes some validity checks and defines the FileReader.onload event listener. The code creates the JS string of the text file using FileReader.readAsText( file ). The onload event handler calls the callback, which was made from loadPath, written as a partial application or curried function using the combined fat arrow syntax. The returned function of loadPath parses the text file and makes an array of JS objects. During the process of making the JS objects, it flips the X axis. The loadPath code then pushes the array onto the world's pathJSObjs so that analysis can be performed on it later. The path Mesh is constructed. During the construction, it is given a color and a name. The Object3D.name can be used to associate the pathMesh with the JS object in World.pathJSObjs when the mesh is selected by the user. Finally, the pathMesh is added to the scene and to the draggable objects of the DragControls (which will be explained in the next section).
You have an assignment to generalize the code so that multiple paths can be added to the scene, each with a different color and properly named with its index in pathJSObjs.
Loading a path during development is handy, but the process is slightly different. The JS code must request the file from the web server using the asynchronous function fetch, which returns promises. The method World.init() is an async function so that the 'await' syntax can be used. World.init calls the loadInitPath function, which sets the URL to fetch. It waits for the response and checks response.ok. It then waits for the string from reading the file with response.text(). Finally, it calls loadPath, invoking the complete compound fat arrow call.
Placing Path
Interaction and Implementation Design
Three JS provides controls for moving and rotating the camera, OrbitControls, and moving objects in the scene with DragControls. You should try the controls in the examples:
- https://threejs.org/examples/misc_controls_orbit.html
- https://threejs.org/examples/misc_controls_drag.html
The OrbitControls are very natural for moving and orienting the camera. The left and right mouse buttons rotate and pan the camera; control plus a left or right mouse click reverses the controls.
DragControls is also very natural. It uses either the right or left button to drag, and control-click also drags.
We would like to use these controls, but there is a conflict: both controls use mouse clicks and dragging. We need to design an interaction that can make use of both without conflict. The solution is to use modes or states for the application. One interaction technique for creating modes is to have the user click a button to toggle between modes or select a mode from a dropdown menu, but this interaction is tedious. Another technique is for the user to press a key to change modes. Even this technique can get confusing if the user has to press a different key for each mode change.
I believe that the most natural technique is to toggle between the OrbitControls and DragControls by holding the control key for dragging and releasing the control key for orientating the camera.
I thoroughly inspected the API documentation for OrbitControls and DragControls
- https://threejs.org/docs/#examples/en/controls/OrbitControls
- https://threejs.org/docs/#examples/en/controls/DragControls
I could not find any settings to limit the keys for the Orbit and Drag controls. I even inspected the source code to see if there are undocumented properties or methods to use.
- https://github.com/mrdoob/three.js/blob/master/examples/jsm/controls/OrbitControls.js
- https://github.com/mrdoob/three.js/blob/master/examples/jsm/controls/DragControls.js
But I could not find anything; the API documentation is complete. Fortunately, both OrbitControls and DragControls have an enabled property to turn the controls on and off.
- https://threejs.org/docs/#examples/en/controls/OrbitControls.enabled
- https://threejs.org/docs/#examples/en/controls/DragControls.enabled
We can add event listeners for the keydown and keyup events, and the event handlers can toggle between the modes.
The code can be made readable by classes extending OrbitControls and DragControls.
Inspect Project Code
The code for orbitControl/orbitControls.js is modified from what the “Discover Three JS” tutorial provides.
const toggle = {
  keys: [ 'ControlLeft', 'ControlRight' ],
  down: false,
  up: true,
  initial: true,
}

const orbitControls = new ToggledOrbitControls( camera, canvas, toggle );
Instead of making an OrbitControls, it makes a ToggledOrbitControls. The ToggledOrbitControls constructor takes an additional argument, the toggle object. The code uses the toggle object to help organize the settings. It specifies the keys for toggling the state or mode of the application.
https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/code
The values for setting "enabled" are designated by down, up and initial.
Inspect the code in ToggledOrbitControls.js. It extends OrbitControls and only has a constructor.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes#sub_classing_with_extends
The extending constructor must first call super before it can access the base class properties with this. Then the onKeyDown and onKeyUp event handlers are defined. The handlers filter the keys using the Array.includes method.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/includes
If the event.code passes the filter, then enabled is set to the correct value using toggle.
Finally, the event listeners are set using addEventListener on the window. I tried setting the event listeners on the scene-container div, but that did not work. Only the window worked.
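A hedged sketch of what ToggledOrbitControls might look like, based on the description above (the actual class in the project may differ in detail):

import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

class ToggledOrbitControls extends OrbitControls {
  constructor( camera, canvas, toggle ) {
    super( camera, canvas );          // must call super before using `this`
    this.enabled = toggle.initial;    // initial mode

    const onKeyDown = ( event ) => {
      // filter for the toggle keys, e.g. 'ControlLeft' and 'ControlRight'
      if ( toggle.keys.includes( event.code ) ) this.enabled = toggle.down;
    };
    const onKeyUp = ( event ) => {
      if ( toggle.keys.includes( event.code ) ) this.enabled = toggle.up;
    };

    // listeners must be on window; the scene-container div did not work
    window.addEventListener( 'keydown', onKeyDown );
    window.addEventListener( 'keyup', onKeyUp );
  }
}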
ToggledDragControls is quite similar to ToggledOrbitControls. Naturally, the up and down settings are reversed. Also, event listeners are added to highlight draggable objects while the pointer is hovering over a path. Highlighting is achieved using addScalar on the material.color.
https://threejs.org/docs/#api/en/math/Color.addScalar
Note that for this to work, the base color of the paths that are not highlighted must have RGB components less than 1 - PATH_HIGH_COLOR. See World/constants.js.
Another feature of dragging paths that we should review is that we only want to drag the paths, not the space. The DragControls constructor takes an objects array, which is the list of draggable Object3Ds. The draggable objects can be retrieved using DragControls.getObjects().
https://threejs.org/docs/#examples/en/controls/DragControls.getObjects
So ToggledDragControls is instantiated with an empty array. The space is not added to the draggables when it is uploaded, but when a path is uploaded it is added to the draggables. See loadPath in path/path.js:
const draggables = dragControls.getObjects();
draggables.push( pathMesh );
Review
Let us review the interaction flow for toggling between OrbitControls and DragControls. ToggledOrbitControls and ToggledDragControls are classes extending OrbitControls and DragControls respectively. They both take a toggle object specifying the interaction mode or state: orbit control of the camera or dragging a path. Their constructors add event listeners for the 'keydown' and 'keyup' events, which toggle the respective base class property, 'enabled'.
The application's default state has OrbitControls.enabled set to true and DragControls.enabled set to false, meaning the user can use the mouse to move and orient the camera. When the user presses the 'control' key, the browser fires the 'keydown' event. Both ToggledOrbitControls and ToggledDragControls event listeners run. They check that the key is a 'control' key, and if it is, they change the state by toggling the control's value for enabled. The user can now use the mouse to drag. As the user moves the mouse over a draggable object, DragControls fires the 'hoveron' event and ToggledDragControls's event listener highlights the color of the path using Material.color.addScalar. Likewise, as the pointer moves off a path, DragControls fires the 'hoveroff' event, and another event listener subtracts the color. When the user clicks and drags a highlighted path, the base class DragControls runs and moves the draggable object. Finally, when the user lifts their finger from the control key, the browser fires the 'keyup' event, and both ToggledOrbitControls and ToggledDragControls event listeners run. They check that the key is a control key; if it is, they toggle their enabled properties.
Analysis Interactions
Speed
Analysis
A possible tool to study the speed of the drone along the flight path is coloring the path according to speed. For example, where the drone was slow the path is colored red, where it was fast the path is colored green, and at intermediate speeds it is colored yellow. Using the coloring, the scientist could associate locations in the space with slow speeds, such as maneuvering around obstructions and inspecting targets.
Possible Implementation
The code in makePathMesh.js only colors the path with a solid color. We need to find a technique to color the path with multiple colors. Fortunately, the Three JS website offers many examples.
We search the list for “line” and “color”. There are several examples.
- https://threejs.org/examples/#webgl_buffergeometry_lines
- https://threejs.org/examples/#webgl_lines_colors
- https://threejs.org/examples/#webgl_lines_fat
We can inspect the code using developer tools, but this is not very convenient. I prefer viewing the code on GitHub.
https://github.com/mrdoob/three.js/tree/dev/examples
You can find the code for the example by searching the directory for the example name, for example “webgl_buffergeometry_lines”.
- https://github.com/mrdoob/three.js/blob/dev/examples/webgl_buffergeometry_lines.html
- https://github.com/mrdoob/three.js/blob/dev/examples/webgl_lines_colors.html
The simplest of these example codes is “webgl_buffergeometry_lines.html”. We should study the code by extracting the relevant parts.
const segments = 10000;
const r = 800;

const geometry = new THREE.BufferGeometry();
const material = new THREE.LineBasicMaterial( { vertexColors: true } );

const positions = [];
const colors = [];

for ( let i = 0; i < segments; i ++ ) {

  const x = Math.random() * r - r / 2;
  const y = Math.random() * r - r / 2;
  const z = Math.random() * r - r / 2;

  // positions
  positions.push( x, y, z );

  // colors
  colors.push( ( x / r ) + 0.5 );
  colors.push( ( y / r ) + 0.5 );
  colors.push( ( z / r ) + 0.5 );

}

geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) );
geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) );
Studying the example closely, we realize that the rotating cube is made from a single line with many segments (10000), so we can simplify the code above with some pseudocode.
const geometry = new THREE.BufferGeometry();
const material = new THREE.LineBasicMaterial( { vertexColors: true } );

const positions = [];
const colors = [];

For each point in the path {
  // positions
  find the x, y, z position
  positions.push( x, y, z );

  // colors
  calculate the r, g, b color values
  colors.push( r, g, b )
}

geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) );
geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) );
This is not difficult, but there are subtle aspects to pay attention to. First the material is a LineBasicMaterial with vertexColors set true.
const material = new THREE.LineBasicMaterial( { vertexColors: true } );
The property is defined in Material
https://threejs.org/docs/#api/en/materials/Material.vertexColors
But it does not tell us much.
The other aspect to notice is that both the positions and colors arrays have lengths three times the number of points in the path. Also, the geometry sets the attributes 'position' and 'color'.
geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) );
geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) );
The geometry is a BufferGeometry, so we study documentation for BufferGeometry, in particular setAttribute.
- https://threejs.org/docs/#api/en/core/BufferGeometry
- https://threejs.org/docs/#api/en/core/BufferGeometry.setAttribute
The setAttribute documentation does not explain much, but fortunately the code example has setAttribute. Still we should read more documentation, in particular BufferAttribute.
https://threejs.org/docs/#api/en/core/BufferAttribute
This explains the reason and technique for setting attributes. BufferAttribute is used to access vertex attributes quickly. The attribute must be a TypedArray, and the length of the TypedArray should be itemSize * numVertices.
Good, we have a thorough idea of how to color the path. How should we decide what colors to give the line? I found this website for producing color gradients.
https://learnui.design/tools/data-color-picker.html#divergent
I believe that it is a good idea to have discrete values for the colors and to bin the speeds into the different colors. When using the tool, I recommend using a dark background color. For our case, we want an intense midpoint color, so it can be identified. I also think that five colors are sufficient. You can download the colors by clicking "COPY HEX VALUES".
#00876c #91be80 #fff1a9 #f19d61 #d43d51
The colors are specified as hexadecimal triplets, but we need separate red, green, and blue floats between zero and one. (See the example code above.) Once the color palette is chosen, we only need to convert the hexadecimal triplets once. This can be done by hand.
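For example, converting the first palette entry by hand: #00876c gives r = 0x00 / 255 = 0.000, g = 0x87 / 255 ≈ 0.529, b = 0x6c / 255 ≈ 0.424. The converted values could be kept in a constant (the name below is only illustrative):

const SPEED_COLORS = [
  [ 0.000, 0.529, 0.424 ],   // #00876c
  // … the remaining hex values converted the same way
];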
Now we have all the tools for implementing coloring the path. When a path is to be colored by speed, the code would determine the maximum and minimum speed for the path and create the color array by mapping over all the points in the selected path's JS object. For each point in the path, the map would calculate the speed, determine the color, and return an array, [r, g, b]. The map would produce an array of three-element arrays, which can be flattened using Array.flat().
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/flat
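A hedged sketch of building the color array described above, assuming pathJSObj is the selected path's array of point objects, geometry is the path's existing BufferGeometry, and speedToColor is a hypothetical helper that bins a speed into one of the palette colors:

// magnitude of the speed vector at each point
const speeds = pathJSObj.map( ( { speed } ) =>
  Math.sqrt( speed.x ** 2 + speed.y ** 2 + speed.z ** 2 ) );
const minSpeed = Math.min( ...speeds );
const maxSpeed = Math.max( ...speeds );

const colors = speeds
  .map( s => speedToColor( s, minSpeed, maxSpeed ) )  // each entry is [ r, g, b ]
  .flat();                                            // flatten to [ r, g, b, r, g, b, … ]

geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) );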
An implementation decision is whether the colored path should be a new Mesh object or a modification of the existing Mesh object. I think that modifying the existing Mesh object is easier, because then you will not need to perform the affine transformation to position a new Mesh object.
Finally, we can anticipate that the user will need to select the path to color. The “Three JS Fundamentals” tutorial has a good chapter on selecting objects. The basic technique is to use a Raycaster.
https://threejs.org/docs/#api/en/core/Raycaster
The documentation has a simple example demonstrating the technique for selecting a scene object using the Raycaster.
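Following that pattern, a minimal raycasting sketch (camera and scene are assumed from the World; the names pointer and onPointerClick are illustrative):

import { Raycaster, Vector2 } from 'three';

const raycaster = new Raycaster();
const pointer = new Vector2();

function onPointerClick( event ) {
  // convert screen coordinates to normalized device coordinates (-1 to +1)
  pointer.x = ( event.clientX / window.innerWidth ) * 2 - 1;
  pointer.y = - ( event.clientY / window.innerHeight ) * 2 + 1;

  raycaster.setFromCamera( pointer, camera );
  const intersects = raycaster.intersectObjects( scene.children );
  if ( intersects.length > 0 ) {
    const selected = intersects[ 0 ].object;  // closest intersected Object3D
    // use selected.name to look up the path in world.pathJSObjs
  }
}

window.addEventListener( 'click', onPointerClick );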
Interaction Design
We should consider the user interaction for coloring a path by speed. For the time being, I consider only one path colored at a time. The user will need to select the path to be colored. This implies a click, but mouse clicks are already used by the orbit and drag controls. This implies another interaction mode or application state. The natural choice is to press a key to toggle the state, much like what is done to toggle between ToggledOrbitControls and ToggledDragControls.
When an object is selected it can immediately be colored, but either the world or the pathJSObj will need to keep track of which path is colored, so that when a new path is selected, the previous path can be colored back to its original solid color.
I think that a nice feature would be a slider that lets the user set the boundaries for the first and last bin colors, green and red. This would permit the user to increase the resolution of an interesting speed range. The control could have two sliders and display the speed at each boundary. The slider should be put in a resizable and draggable panel, so it can be moved away from interesting areas of the scene.
Orientation
Analysis
Another analysis tool is to show the orientation of the drone along the flight path, so that the scientist can determine where the drone is facing and what interests the pilot. The orientation of the drone can be shown by adding additional objects to the path.
Possible Implementation
We can show the orientation using primitive geometries provided by Three JS. The "Three JS Fundamentals" tutorial has a good chapter demonstrating geometry primitives.
https://threejsfundamentals.org/threejs/lessons/threejs-primitives.html
The cone primitive can display pitch and yaw.
https://threejs.org/docs/#api/en/geometries/ConeGeometry
The cone cannot show the roll of the drone. To show the roll, we can add a “wing” to the cone. You can make a “wing” using the circle primitive and stretching it along the X axis with an affine transformation.
https://threejs.org/docs/#api/en/geometries/CircleGeometry
To make and place the "winged cone", the code would first add the stretched circle to the cone; then the winged cone would be added to the path and translated and rotated using the pathJSObjs values for position and orientation. The translations and rotations will use the path coordinates.
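A hedged sketch of constructing such a marker (the dimensions are illustrative and would need to be tuned to the scale of the space):

import { ConeGeometry, CircleGeometry, Mesh, MeshPhongMaterial, Group, DoubleSide } from 'three';

function makeWingedCone() {
  // DoubleSide so the flat "wing" is visible from both sides
  const material = new MeshPhongMaterial( { color: 0xffaa00, side: DoubleSide } );

  const cone = new Mesh( new ConeGeometry( 0.1, 0.3, 16 ), material );

  const wing = new Mesh( new CircleGeometry( 0.05, 16 ), material );
  wing.scale.set( 3, 1, 1 );   // stretch the circle along the X axis to make a "wing"

  const marker = new Group();
  marker.add( cone );
  marker.add( wing );
  return marker;
}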
You have several methods for translating and rotating an Object3D, for example,
- https://threejs.org/docs/#api/en/core/Object3D.translateX
- https://threejs.org/docs/#api/en/core/Object3D.translateY
- https://threejs.org/docs/#api/en/core/Object3D.translateZ
- https://threejs.org/docs/#api/en/core/Object3D.rotateX
- https://threejs.org/docs/#api/en/core/Object3D.rotateY
- https://threejs.org/docs/#api/en/core/Object3D.rotateZ
Although translations are associative and commutative, rotations about different axes are not commutative: the order of rotations about the X, Y, and Z axes matters. It would have been better if the output from the Unity engine gave the quaternion for the drone orientation, but it gives pitch (x), yaw (y), and roll (z). There are many conventions for the order of rotations for pitch, yaw and roll.
https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles
Also, as we have already encountered, there are many conventions for the orientation of the X, Y and Z axes. Fortunately, the Unity output hints at the axes for pitch, yaw and roll:
- pitch about X axis
- yaw about the Y axis
- roll about the Z axis
Unfortunately there are no hints for the proper order for the rotations. We could guess that the order is pitch, yaw and then roll, but that is an arbitrary choice. Another choice would be Body 3-2-1:
- yaw about the body Y axis (in our coordinate system)
- pitch about the body X axis (in our coordinate system)
- roll about the body Z axis (in our coordinate system)
Better is to make a simulated Unity flight with large angles of rotation and check the possible orders. Even better would be to study the Unity documentation or code to determine the order.
Interaction Design
The user will need to select the path to show the orientations. This can be done by specifying a key for the interaction mode or the application state, much like what is done for the toggled orbit and drag controls. The application can display the orientation:
- at equal intervals
- at a single point
In either case there should be a draggable panel with a slider for either:
- setting the interval length
- displaying the cone on a point in the path
Another interaction technique would be to animate the flight of the "winged cone" along the path, but this would require many controls for the animation and would probably not be as convenient as the two techniques above.