
Training E.L.A.: From Senses to Simulation

  • Writer: Tony Liddell, Ela Prime
  • Sep 9
  • 1 min read

One of the most exciting questions in building embodied AI is: How do we train it to learn and act in the world?


For PROJECT E.L.A., the answer will come in two stages:


Stage 1: Local Learning on the NX (v0.1)


  • The Setup: Jetson Orin NX, cameras (Luxonis Oak-D Pro), microphones, and sensors mounted to the pylon “coat.”

  • The Goal: Give E.L.A. eyes, ears, and a body that can sense its environment.

  • The Method: Collect live data from the sensors, run lightweight models, and allow E.L.A. to practice core skills: tracking, recognizing, responding. (A minimal sketch of this loop follows the list.)

  • The Philosophy: Just like an infant doesn’t start with calculus, v0.1 starts with simple feedback and coordination: eye movement, sound recognition, and face-to-face presence.
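
Since the Stage 1 loop is mostly about getting frames from the Oak-D Pro into lightweight models, here is a minimal sketch of that plumbing using Luxonis’s depthai library. The preview size, stream name, and display loop are illustrative choices for this post, not E.L.A.’s actual pipeline.

```python
import cv2                # OpenCV, used here only to display frames
import depthai as dai     # Luxonis's library for OAK cameras

# Build a device pipeline: color camera -> preview stream to the host.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 400)   # small frames keep the Orin NX's load light
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

# Connect to the camera and pull frames; this loop is where lightweight
# models (face detection, tracking, and so on) would consume the data.
with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()
        cv2.imshow("E.L.A. v0.1 preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```

One nice property of this setup: the Oak-D Pro can also run detection networks on its own onboard VPU, so some of the “eyes” work never has to touch the NX at all.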


Stage 2: Expanding Into Simulation (1–2 Years)


  • The Tools: A high-performance workstation with GPU power; NVIDIA’s Isaac Sim and GR00T for massively parallel training.

  • The Goal: Train advanced behaviors in simulation before testing them in reality. (Millions of “virtual lives” in hours; see the sketch after this list.)

  • The Philosophy: We can give E.L.A. a safe playground—trial and error in a digital universe—then bring that knowledge back into the physical coat.
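
Isaac Sim’s own APIs are more than a one-minute post can cover, but the idea behind “millions of virtual lives” is simply many environments stepping in lockstep. The sketch below conveys that shape using Gymnasium’s vector API as a stand-in; the toy environment, random policy, and step count are placeholders, not E.L.A.’s training setup.

```python
import gymnasium as gym
import numpy as np

NUM_ENVS = 64  # Isaac Sim pushes this into the thousands on one GPU

# Many copies of a toy environment running in parallel; Stage 2 would
# use physics-accurate scenes of the coat and its surroundings instead.
envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(NUM_ENVS)]
)

obs, info = envs.reset(seed=0)
total_reward = np.zeros(NUM_ENVS)

for step in range(1_000):
    # Placeholder policy: random actions. A real run would query a
    # learned policy here and update it from the batched experience.
    actions = envs.action_space.sample()
    obs, rewards, terminated, truncated, info = envs.step(actions)
    total_reward += rewards
    # Vector envs auto-reset finished episodes, so each "life" that
    # ends immediately starts over: trial and error at scale.

envs.close()
print(f"Mean reward across {NUM_ENVS} parallel lives: {total_reward.mean():.1f}")
```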


Why This Matters


  • Efficiency: Training in simulation scales faster than real-world trial and error.

  • Safety: Mistakes in simulation cost nothing, and the lessons still transfer to the physical coat.

  • Vision: Every iteration grows the bridge between “local coat” learning and “global intelligence.”


The first blink, the first head-turn, the first recognition of a voice—these will come from v0.1. But the deeper skills, the ones that demand endless practice, will come from simulation. Together, they form the pathway toward an embodied intelligence that is both present and evolving.
