STIC-1 Exercise 6 Report
STIC-1 University lecturer: Nicolas Szilas, Assistant: Djamileh Aminian
Author: Femmy Priscillya Antolinez
Publication date: 13 January 2026
1.0. Links
Créer en 3D avec JavaScript : Découvrir Three.js (Creating in 3D with JavaScript: Discovering Three.js)
2.0. Objectives
The goal of this project is to give an introduction to Three.js and its potential through an interactive Reveal.js slide deck. Three.js runs as a background layer, and each slide triggers a different 3D scene. 3D visualisation is commonly used to help learners develop spatial understanding of scale, depth, 3D form, and perspective. Camera framing and lighting are therefore important to make the scenes feel realistic: light and shadow give form to the 3D world; otherwise, everything would look flat. This exercise is meant to demonstrate to learners that 3D can be interactive using JavaScript. I hope it will be useful and can inspire beginners in code as well as future MALTT students at TECFA.
3.0. Design
Building these interactive pages involved a lot of trial and error (bugs, wrong paths, objects not showing, and interactions conflicting with slide navigation). As a beginner in JavaScript, I made it my main goal to keep the architecture simple and stable.
The project combines two libraries with different roles: Reveal.js manages the presentation (slides, navigation, and layout), while Three.js renders the 3D graphics in the background. Instead of creating a new Three.js setup for every slide, I use one continuous 3D engine to reduce complexity and prevent performance issues: one renderer, one scene, one camera, and one animation loop.
A dedicated container (<div id="three-root">) receives the WebGL canvas created by Three.js. On top of the canvas, I display a fixed overlay called a HUD (heads-up display), which keeps the slide text readable while the 3D scene stays behind. The HUD is updated by cloning the current slide's .panel element and inserting it into <div id="hud">, as sketched below.
Each slide that needs a 3D scene includes an attribute like data-three="slide6". In main.js, a JavaScript object called slideHandlers maps these keys to functions, and each function builds the 3D content for that slide (adds objects, lights, and models, and sets the camera); a sketch of this mapping follows the slide list below.
- slide1: Earth + atmosphere + star particles
- slide2: connected nodes (simple network)
- slide3: geometric edges (thick wire structure)
- slide4: wireframe sphere
- slide5: external glTF model demo (floating fox, “space” mood) from Sketchfab
- slide6: lighting simulation with keyboard controls
- slide7: architectural corridor/vault scene (camera placed “inside”)
- slide8: interactive geodesic dome (toggle surface/struts, change detail)
- slide9: animated robot model (skinning + facial morph targets), based on an official Three.js example
- slide10: Tokyo scene model with keyframe animation and lighting mood switching, based on a Three.js example
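As a hedged sketch, the mapping could take the shape below; the handler bodies here are placeholders, and the shared `demo` group is described in the Production section:

```js
import * as THREE from 'three';

// Hypothetical shape of the slideHandlers object; real bodies differ.
const slideHandlers = {
  slide1: () => { /* Earth + atmosphere + star particles */ },
  slide4: () => {
    // e.g. the wireframe sphere, added to the shared `demo` group
    const sphere = new THREE.Mesh(
      new THREE.SphereGeometry(2, 24, 16),
      new THREE.MeshBasicMaterial({ color: 0x88ccff, wireframe: true })
    );
    demo.add(sphere);
  },
  slide6: () => { /* lighting simulation + keyboard controls */ },
  // ... one entry per data-three key
};
```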
A function called applySlide() is responsible for switching the 3D content. When Reveal.js changes slide, applySlide() detects the current data-three key, calls the matching slide function, and updates the HUD text.
To avoid state leaking between slides (for example, a floor hidden on one slide staying hidden on the next, or lights staying dim), every slide starts from a clean baseline using reset logic (camera preset, fog, lights, and controls restored). This makes slide-to-slide behavior predictable.
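A sketch of this flow, assuming the helper names resetBaseline(), clearDemo(), and updateHud() (only applySlide() and slideHandlers come from the report; the Reveal.js events are covered in the Production section):

```js
// Switch 3D content when Reveal.js changes slide.
function applySlide(slideElement) {
  resetBaseline();                         // restore camera preset, fog, lights, controls
  clearDemo();                             // remove the previous slide's objects
  updateHud(slideElement);                 // refresh the text overlay
  const key = slideElement.dataset.three;  // e.g. "slide6"
  if (key && slideHandlers[key]) slideHandlers[key]();
}

Reveal.on('ready', (event) => applySlide(event.currentSlide));
Reveal.on('slidechanged', (event) => applySlide(event.currentSlide));
```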
4.0. Production
I started from the Reveal.js template used in the STIC-1 exercise (stic-1-revealjs-template-main) and built a 10-slide presentation structure. To learn the basics of Three.js, I followed a beginner tutorial on YouTube (Robot Bobby, 2024; see Resources) and then studied official examples from the Three.js website and repository.
For external 3D assets, I used glTF/GLB models: the “floating fox” model was downloaded from Sketchfab, and the robot/Tokyo demo models come from the official Three.js examples.
In a typical Three.js project, you create a scene, a camera, and a renderer, then you add 3D objects into the scene. The renderer draws the scene into a WebGL canvas. In my project, the canvas is not appended to document.body directly; instead it is inserted into #three-root so it can sit behind the Reveal.js slides.
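A minimal sketch of that setup, in the import style used by current Three.js examples (the sizes and camera values are placeholders, not the project's actual numbers):

```js
import * as THREE from 'three';

// One engine for the whole deck: renderer, scene, camera.
const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
// Insert the canvas into #three-root instead of document.body,
// so it sits behind the Reveal.js slides.
document.getElementById('three-root').appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 1000
);
camera.position.set(0, 2, 8);

renderer.render(scene, camera);  // one draw; the continuous tick() loop is described below
```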
Integrating Three.js into Reveal.js was more challenging than the standalone demos provided on the Three.js website, because both libraries want to control the screen. Early versions failed because the 3D canvas was hidden by the slide layers, or because keyboard input triggered Reveal navigation instead of 3D interactions. To solve this, I used a stable layering strategy (canvas behind, text overlay above) and scoped keyboard interactions so they only run on the relevant slide, as sketched below.
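A sketch of that scoping, assuming a variable currentKey that tracks the active data-three value; toggleLights() and the 'l' key binding are hypothetical:

```js
// Capture-phase listener: runs before Reveal.js sees the key event.
window.addEventListener('keydown', (event) => {
  if (currentKey !== 'slide6') return;  // only react on the lighting slide
  if (event.key === 'l') {
    event.preventDefault();             // keep the key away from the browser default
    event.stopPropagation();            // and away from Reveal.js navigation
    toggleLights();                     // hypothetical lighting toggle
  }
}, true);                               // `true` = capture phase
```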
For styling, the layout is based on the Reveal.js CSS from the template, plus my own stylesheet (assets/css/style.css) to handle the HUD layout, the .panel content design, and mouse interaction using pointer-events (so the 3D canvas can receive drag/zoom input while slide text remains readable).
The main logic is implemented in assets/js/main.js. A continuous animation loop called tick() (using requestAnimationFrame) keeps the 3D scene alive. Slide changes do not stop the loop; instead, the code replaces the content inside a dedicated Three.js group (called demo) depending on the active slide.
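A sketch of that swap; the disposal details are my assumption (materials can also be arrays, which this simplified version ignores), and in main.js the scene is the shared one from the engine setup:

```js
import * as THREE from 'three';

const scene = new THREE.Scene();  // in main.js this is the shared scene
const demo = new THREE.Group();   // persistent container for per-slide content
scene.add(demo);

// Empty the group before building the next slide's scene.
function clearDemo() {
  while (demo.children.length > 0) {
    const obj = demo.children[0];
    demo.remove(obj);
    obj.traverse((child) => {     // free GPU resources held by meshes
      if (child.geometry) child.geometry.dispose();
      if (child.material) child.material.dispose();
    });
  }
}
```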
The JavaScript logic is as follows:
- Import Three.js and add-ons: OrbitControls (mouse navigation), GLTFLoader (load 3D models), and DRACOLoader (decode compressed models).
- Create the Three.js “engine” once: renderer + scene + camera + controls.
- Add shared elements used across slides (lights, fog, floor, star background).
- Use Reveal.js events (ready, slidechanged) to detect the current slide.
- If the slide has data-three, call the matching function in slideHandlers to build the 3D content for that slide.
- Update the HUD by cloning the slide's .panel content into #hud.
- Handle keyboard input only on specific slides (for example: lighting controls only on slide6, dome controls only on slide8, robot animation controls only on slide9).
- Keep the render loop running continuously, but update heavy animations (mixers) only on the slides where they are needed.
- Avoid memory leaks by removing and disposing objects that are no longer displayed, while caching large models (fox/robot/tokyo) so they do not reload every time; a caching sketch follows below.
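For the last point, a hedged sketch of the caching idea; loadModel() and the Map are my names, not necessarily the report's:

```js
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
const modelCache = new Map();  // URL -> parsed glTF, kept across slides

function loadModel(url) {
  if (modelCache.has(url)) return Promise.resolve(modelCache.get(url));
  return new Promise((resolve, reject) => {
    loader.load(url, (gltf) => {
      modelCache.set(url, gltf);  // cache so revisiting a slide is instant
      resolve(gltf);
    }, undefined, reject);
  });
}
```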
5.0. Self-assessment
Creating 3D with JavaScript code opens a new door to another dimension of unlimited creativity. Besides Reveal.js (which controls the slide structure and navigation), I chose Three.js as a third-party library because it runs directly in the browser (desktop or mobile) without any installation. It's a great tool to demonstrate 3D interactivity in a web format.
While working on this exercise, I discovered its large ecosystem of reusable components (controls, loaders, helpers, and many examples available on the three.js website). The official website provides many ready-to-use examples and learning resources, which I used as a base and adapted to integrate into my slides; adapting them to the single-renderer, slide-driven format was the main challenge (camera framing, resetting state between slides, and loading resources). These were some of the main challenges I faced:
- Leaks between slides (fog, floor, controls constraints, etc.). To avoid this, a global reset mechanism is introduced at the start of each slide (SetCameraForMode).
- Visibility vs. camera framing: the 3D model was not visible even though it loaded (status 200 in the console). The model had loaded, but the camera was not in the correct position. ChatGPT suggested frameObject() to automatically fit the model to the view; a sketch of this idea follows the list.
- OrbitControls and mouse interaction problems. In the beginning, OrbitControls wasn't responding; this was due to the wrong DOM element target and CSS layers that blocked pointer events. Reveal layers had pointer-events that were disabled/enabled in a complex way, and the CSS setup affected whether mouse input reached the canvas. The solution proposed by ChatGPT was to attach OrbitControls to the correct element, new OrbitControls(camera, renderer.domElement), and to explicitly enable pointer events on the canvas.
- Keyboard interactivity that conflicts with Reveal.js navigation. ChatGPT introduced event handling like preventDefault(), stopPropagation(), and a capture-phase listener (see the keyboard sketch in the Production section).
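As referenced in the second bullet, here is a hedged reconstruction of the frameObject() idea (my project's exact implementation may differ): compute the model's bounding box, then place the camera far enough back to see all of it.

```js
import * as THREE from 'three';

// Fit the camera to an object's bounding box (the margin factor is a guess).
function frameObject(object, camera, controls) {
  const box = new THREE.Box3().setFromObject(object);
  const center = box.getCenter(new THREE.Vector3());
  const size = box.getSize(new THREE.Vector3());
  const maxDim = Math.max(size.x, size.y, size.z);
  const fov = THREE.MathUtils.degToRad(camera.fov);
  const distance = (maxDim / 2) / Math.tan(fov / 2) * 1.4;  // 1.4 adds margin
  camera.position.copy(center).add(new THREE.Vector3(0, 0, distance));
  camera.lookAt(center);
  if (controls) {
    controls.target.copy(center);  // keep OrbitControls orbiting the model
    controls.update();
  }
}
```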
Since this is the last exercise of the STIC-1 course this semester, I'd like to share how much it has changed the way I see coding. At the beginning, looking at code scared me. I'm truly grateful for the "learning by doing" approach, because it helped me discover the creativity and the (almost) unlimited possibilities that code can offer. For this exercise, I relied a lot on AI as a guide and mentor. I could also have asked for help from someone on the online forum, but AI makes my workflow smoother: I can test an idea immediately, get feedback right away, and keep moving without waiting for someone to reply.

This also raises an ethical question for me: if we don't use AI as a shortcut, but instead engage actively, trying, understanding, and learning from its explanations, does that make its use more acceptable? We're living in an age where we're witnessing how AI can transform the way humans learn. It can encourage passivity if we let it do the thinking for us. However, if we stay curious, ask "why?", verify, and use feedback to improve our own understanding, it can also support active learning. This tool is powerful, and like a superpower in a superhero movie, using it requires self-regulation, critical thinking, integrity, and responsibility. Ultimately, the quality of learning still depends on the human behind it. This process has changed my perspective on learning and on coding as a creative tool.
6.0. Resources
- Fox, G. (n.d.). Littlest Tokyo [3D model]. ArtStation. https://artstation.com/artwork/1AGwX (CC BY 4.0)
- MALTT-STIC. (n.d.). stic-1-revealjs-template [GitHub repository]. Retrieved December 17, 2025, from https://github.com/MALTT-STIC/stic-1-revealjs-template
- MDN Web Docs. (2024, December 4). WebGLRenderingContext: lineWidth() method. Retrieved December 27, 2025, from https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/lineWidth
- OpenAI. (2025). ChatGPT (GPT-5 model). https://chatgpt.com
- Prinavu. (n.d.). Floating Fox [3D model]. Sketchfab. https://sketchfab.com/3d-models/floating-fox-497de71996cf4f4d86e387028938aceb (CC BY 4.0)
- Quaternius & McCurdy, D. (n.d.). Animated Robot (RobotExpressive) [3D model pack]. Quaternius. https://quaternius.com/packs/animatedrobot.html (CC0 1.0)
- Robot Bobby. (2024, February 28). Getting Started with Three.js: A Beginner's Tutorial [Video]. YouTube. Retrieved December 17, 2025, from https://www.youtube.com/watch?v=XPhAR1YdD6o
- STIC: STIC I – exercice 6 (Freya). (n.d.). EduTechWiki. Retrieved December 17, 2025, from https://edutechwiki.unige.ch/fr/STIC:STIC_I_-_exercice_6_(Freya)
- Three.js. (n.d.). Fundamentals. Retrieved December 27, 2025, from https://threejs.org/manual/#en/fundamentals