
Yugma vs Three.js: AI scene composer or hand-written code?

What Three.js / R3F gives you

Three.js is the de facto 3D engine of the web; React Three Fiber (R3F) wraps it in a React-friendly declarative API. You get full control: every shader, every render pass, every frame of the render loop. You also get every choice, and every responsibility.

The downside is well-documented in the community. Real questions from discourse.threejs.org include "How should I manage scene objects (add, remove, get), update game object properties, manage object relationships, etc., within the r3f workflow?" and "How can I force [animation] to play? And also is there a way that when I add more models that not the whole models.map is rerendered?" — these are scene-graph CRUD problems every R3F project hits eventually.

What Yugma adds on top

Yugma is a hand-tuned R3F app with a scene store, a transform-controls overlay, a perf-tested re-render strategy, a serialized scene format and 19 typed tool calls that an LLM can drive. You can think of it as R3F with an AI Director sitting in front of it.

The agentic AI loop reads the current scene graph, plans a multi-step edit, and emits parallel tool calls in a single LLM response. The same calls a human would make to your store — but composed by the model.
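
As a rough illustration of that loop (a sketch only, not Yugma's source; readSceneGraph, planEdits and applyToolCall are hypothetical names):

// Sketch of the agentic loop; every function below is a hypothetical stub.
type ToolCall = { name: string; args: Record<string, unknown> }
type SceneGraph = { objects: unknown[] }

declare function readSceneGraph(): SceneGraph                                       // serialize current scene state
declare function planEdits(prompt: string, graph: SceneGraph): Promise<ToolCall[]>  // one LLM response, many tool calls
declare function applyToolCall(call: ToolCall): void                                // typed mutation on the scene store

async function runAgentTurn(prompt: string) {
  const graph = readSceneGraph()
  const calls = await planEdits(prompt, graph)   // the model plans the whole edit up front
  for (const call of calls) applyToolCall(call)  // same mutations a human editor would trigger
}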

The 19 tool calls Yugma's AI uses

Each is a typed mutation against the scene store, and every commit lands as one undoable transaction.
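
The full list isn't reproduced here, but as a sketch of the shape (the argument fields and store API below are illustrative assumptions; only the add_object and set_environment names come from Yugma's own example output):

// Illustrative shapes only, not Yugma's published schema.
type AddObjectCall = {
  tool: 'add_object'
  args: { id: string; kind: string; position: [number, number, number] }
}
type SetEnvironmentCall = {
  tool: 'set_environment'
  args: { preset: string }
}
type ToolCall = AddObjectCall | SetEnvironmentCall // ...plus the remaining tools

// One commit = one undoable transaction: snapshot once, then apply every call.
function commit(store: { snapshotForUndo(): void; apply(c: ToolCall): void }, calls: ToolCall[]) {
  store.snapshotForUndo()
  for (const call of calls) store.apply(call)
}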

When pure Three.js / R3F is the right answer

When Yugma is the right answer

Hybrid: design in Yugma, extend in R3F

The most common pattern: a designer drafts the scene in Yugma, exports GLB, hands it to a developer who imports it into a hand-tuned R3F project. The developer then layers shaders / interactions / audio on top.

Because Yugma's scene graph is a clean, structured form (objects with id / transform / material / tags), the GLB plus a JSON manifest is enough for most teams. Pull the GLB, parse the manifest, and drop it into your R3F scene.
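
A minimal sketch of that handoff (useGLTF and <primitive> are standard drei / R3F APIs; the file names and manifest fields are assumptions for illustration):

// Sketch: load the exported GLB plus its JSON manifest in an R3F project.
import { Suspense } from 'react'
import { Canvas } from '@react-three/fiber'
import { useGLTF } from '@react-three/drei'
import manifest from './bedroom.manifest.json'   // assumed manifest: objects with id / transform / material / tags

function YugmaScene() {
  const { scene } = useGLTF('/bedroom.glb')
  const rugIds = manifest.objects
    .filter((o) => o.tags.includes('rug'))       // query the manifest instead of walking the GLB
    .map((o) => o.id)
  console.log('rug objects:', rugIds)
  return <primitive object={scene} />
}

export default function App() {
  return (
    <Canvas shadows>
      <ambientLight intensity={0.6} />
      <Suspense fallback={null}>
        <YugmaScene />
      </Suspense>
    </Canvas>
  )
}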

Code vs prompt: same scene, two ways

A 12×14 bedroom with a queen bed, two side tables, reading lamps, an oak floor and a rug:

// Hand-written R3F (~200 lines, abridged)
import { Canvas } from '@react-three/fiber'
import { Plane } from '@react-three/drei'

<Canvas shadows> {/* `shadows` is needed for castShadow to take effect */}
  <ambientLight intensity={0.4} />
  <directionalLight position={[5,8,3]} intensity={1.2} castShadow />
  <Plane args={[12, 14]} rotation-x={-Math.PI/2} material-color="#a37b53" receiveShadow />
  {/* Bed */}
  <mesh position={[0, 0.45, -1.5]} castShadow>
    <boxGeometry args={[2, 0.9, 2.4]} />
    <meshStandardMaterial color="#c9b89b" />
  </mesh>
  {/* … 30 more meshes, lights, materials … */}
</Canvas>
// Yugma prompt (8 words)
"draft a 12×14 mid-century bedroom with a rug"
// → AI emits 11 parallel add_object calls + 1 set_environment call.
// Output: same scene, 90 seconds, exportable to the R3F snippet above.

FAQ

Is Yugma a replacement for Three.js?

No — Yugma is built on Three.js (via R3F). It's a content-authoring layer on top of the engine, not a replacement.

Can I export a Yugma scene to a regular Three.js project (no React)?

Yes — export GLB and load it in a vanilla Three.js scene with GLTFLoader.
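
A minimal sketch, assuming the export is saved as scene.glb (GLTFLoader ships with Three.js under three/addons):

// Sketch: load a Yugma-exported GLB in a vanilla Three.js app.
import * as THREE from 'three'
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js'

const scene = new THREE.Scene()
const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.1, 100)
camera.position.set(4, 3, 6)

const renderer = new THREE.WebGLRenderer({ antialias: true })
renderer.setSize(window.innerWidth, window.innerHeight)
document.body.appendChild(renderer.domElement)

new GLTFLoader().load('/scene.glb', (gltf) => {
  scene.add(gltf.scene)                                            // drop the exported scene graph in
  renderer.setAnimationLoop(() => renderer.render(scene, camera))
})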

I already have a hand-coded R3F scene store. Do I throw it away?

No. You can adopt Yugma for content authoring while keeping your store. Yugma's GLB output drops into your scene like any other model.

Should I learn R3F or vanilla Three.js if I already use React?

R3F if you already use React: declarative props match React's mental model, and Yugma's open-source patterns are R3F-shaped. Vanilla Three.js still wins for performance-critical, engine-level paths.

Why do LLMs (like ChatGPT) often write broken Three.js code?

Because the public Three.js API evolves quickly and LLM training data lags behind it. Yugma sidesteps this by exposing typed tool schemas: the LLM can't accidentally call a removed method.
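
For example, if each tool is described by a runtime schema (zod is used here purely as an illustration; these are not Yugma's actual definitions), a hallucinated tool name or stale method never reaches the scene store:

// Illustrative: any call that doesn't match a known tool schema is rejected.
import { z } from 'zod'

const toolCall = z.discriminatedUnion('tool', [
  z.object({ tool: z.literal('add_object'),      args: z.object({ kind: z.string() }) }),
  z.object({ tool: z.literal('set_environment'), args: z.object({ preset: z.string() }) }),
  // ...the remaining tools
])

const result = toolCall.safeParse({ tool: 'scene.addMesh', args: {} })  // not a real tool
if (!result.success) {
  console.warn(result.error.issues)  // invalid call is logged, never applied
}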

Where can I learn the architecture?