The first time a physical therapist made my leg move with electricity, I was amazed. I watched my quadriceps contract, the muscle visibly twitching under my skin, my knee extending, and felt absolutely nothing. No sensation of effort. No proprioceptive feedback. Just visual confirmation that a limb I couldn't feel was doing something I hadn't asked it to do.
"That's weird," I said.
She smiled. She'd probably heard that a thousand times.
The Moment It Clicked
I used Functional Electrical Stimulation in rehab for about two years, mostly in the form of FES cycling, where electrodes fire my quads and hamstrings in sequence while my legs turn pedals on a stationary bike. I also did isolated exercises where current makes individual muscles contract so they don't atrophy completely. Standard stuff for spinal cord injury rehabilitation. But it wasn't until I started researching FES walking systems that something clicked.
The Parastep I, the only FDA-approved surface FES walking system in the US, was approved in 1994. It costs $13,000. It uses open-loop control: you press a button on the walker, stimulation fires in a pre-programmed sequence, and you hope your leg does approximately the right thing at approximately the right time. No sensors. No feedback. The user is the control loop.
I sat with that for a while. Thirty years of technological progress. Edge ML running on microcontrollers. And the state of the art in commercial FES walking is still "press button, hope for best."
The research labs have closed-loop systems. The Cleveland FES Center has been publishing papers on sensor-driven FES since the 90s. Implanted systems with 16 channels. Walking speeds of 0.4 m/s. But none of it has made it to market for surface stimulation. Why?
The Problem Is Timing
I assumed the hard part would be making muscles contract reliably. Getting electrode placement right. Tuning stimulation parameters. The mechanical stuff. I was wrong. Making muscles contract is easy. Any TENS unit does that. The hard part is timing.
Walking is a precisely choreographed sequence of muscle activations. Your quadriceps fire during loading response to prevent your knee from buckling. Tibialis anterior activates during swing to clear your foot. Gastrocnemius provides push-off. Each activation has to happen in a specific window, and those windows are smaller than I expected.
At the slow walking speeds achievable with FES (0.1-0.2 m/s), gait phases last about 500-800ms. That sounds like plenty of time. Until you add up the delays.
Sensing takes time. Processing takes time. Communication takes time. Even after you've decided to stimulate, there's electromechanical delay, the gap between electrical activation and actual muscle force production. Fresh muscle: 50-100ms. Fatigued muscle: 100-300ms.
Total system delay can hit 400ms. Half the available window, gone before the muscle even starts producing force.
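To make the budget concrete, here's the arithmetic with illustrative numbers drawn from the ranges above. Every value is a placeholder, not a measurement; the real numbers depend on hardware choices and how fatigued the muscle is.

```python
# Hypothetical latency budget. All numbers are illustrative, drawn from
# the ranges quoted above, not measured on real hardware.
DELAYS_MS = {
    "imu_sampling": 10,     # sensor read + onboard filtering
    "inference": 20,        # gait phase classification on the MCU
    "ble_round_trip": 30,   # radio hop to the stimulation controller
    "stim_rise": 20,        # stimulator output ramp
    "emd_fatigued": 300,    # electromechanical delay, worst case
}
PHASE_WINDOW_MS = 500       # shortest gait phase at FES walking speeds

total = sum(DELAYS_MS.values())
print(f"{total} ms of delay against a {PHASE_WINDOW_MS} ms phase window")
# -> 380 ms of delay against a 500 ms phase window
```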
I think this is why Parastep uses open-loop control. The simplest solution to latency hell is to not close the loop at all. Fire pre-programmed sequences and let the human adapt. But how would such a system handle fatigue, unexpected terrain, or anything else it wasn't programmed for?
You Can't React. You Have to Predict.
The insight that reshaped my approach came from Zahradka et al. (2019), who tested different triggering strategies for FES walking systems. Their key finding: triggering stimulation at 50% of the current phase duration (essentially predicting when the next phase will start and stimulating early) reduced timing errors to 0.3% of the gait cycle. You don't wait for the gait event to happen and then react. You predict when it will happen and act in advance.
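Here's a minimal sketch of that triggering strategy. The function and the moving-average duration estimate are my own illustration, not Zahradka et al.'s implementation:

```python
def predictive_trigger_s(recent_phase_durations_s, lead_fraction=0.5):
    """Return how long after the current phase's onset to fire stimulation.

    Instead of detecting the next gait event and reacting, estimate the
    current phase's duration from recent history and trigger at a fixed
    fraction of it, per the strategy in Zahradka et al. (2019).
    """
    expected = sum(recent_phase_durations_s) / len(recent_phase_durations_s)
    return lead_fraction * expected

# e.g. the last three stance phases lasted 0.62, 0.60, 0.58 seconds
fire_after = predictive_trigger_s([0.62, 0.60, 0.58])
print(f"fire {fire_after * 1000:.0f} ms after phase onset")
# -> fire 300 ms after phase onset
```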
This runs counter to how I've thought about distributed systems professionally. We obsess over consistency. We want to know the actual state before taking action. We build consensus protocols. We wait for acknowledgments.
The body doesn't work that way. By the time you know something happened, it's too late to respond. The motor cortex doesn't wait for sensory confirmation before initiating the next movement. It predicts, acts, and corrects.
FES walking requires the same approach. Sense the current state. Predict the future state. Stimulate for where the limb will be, not where it is.
The Training Data Problem
This creates a chicken-and-egg problem.
To build a gait phase classifier, I need training data. To collect training data, I need to walk. To walk, I need a working gait phase classifier.
The obvious ML solution, collect data from healthy subjects, doesn't work here. FES gait differs from normal walking in ways that matter: shorter steps, wider stance, different muscle activation patterns because you're recruiting motor units in reverse order (fast-twitch before slow-twitch, the opposite of voluntary contraction). A model trained on normal gait will fail on FES gait.
My current approach: synthetic data generation. Build a musculoskeletal avatar in Unity3D. Attach virtual IMUs at the sensor positions. Animate walking with physics that approximate FES gait characteristics. Record synthetic accelerometer and gyroscope streams.
Sharifi Renani et al. (2021) found that training on synthetic IMU data improved joint angle prediction by 38%. Combining synthetic and real data pushed that to 54%. Whether those gains transfer from continuous angle prediction to discrete gait phase classification remains an open question. But it's a starting point.
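For a flavor of what "synthetic" means in practice, here's the kind of domain randomization I'm planning for the virtual IMU streams. The noise levels, biases, and mounting-angle perturbation below are my own guesses, not values from the paper:

```python
import numpy as np

def augment_synthetic_imu(accel, gyro, rng):
    """Domain-randomize one synthetic IMU recording (N x 3 arrays)."""
    # Additive sensor noise
    accel = accel + rng.normal(0, 0.05, accel.shape)    # m/s^2
    gyro = gyro + rng.normal(0, 0.02, gyro.shape)       # rad/s
    # Per-recording constant bias, mimicking calibration error
    accel = accel + rng.uniform(-0.1, 0.1, (1, 3))
    gyro = gyro + rng.uniform(-0.05, 0.05, (1, 3))
    # Small random rotation, mimicking imperfect sensor placement
    theta = rng.uniform(-0.1, 0.1)                      # radians
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return accel @ R.T, gyro @ R.T

rng = np.random.default_rng(0)
accel, gyro = np.zeros((1000, 3)), np.zeros((1000, 3))  # placeholder streams
accel_aug, gyro_aug = augment_synthetic_imu(accel, gyro, rng)
```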
The deeper lesson here is about bootstrapping. Complex systems often face this problem: you need X to build Y, but you need Y to get X. The solution is usually finding a lower-fidelity version of X that's good enough to bootstrap Y, then iterating.
Simulation is my lower-fidelity X. It won't be perfect. But it might be good enough to get the system walking, at which point I can collect real data and improve.
Safety Is Not a Feature
I spent weeks designing the control architecture before I seriously thought about safety. That was a mistake. When I finally sat down to think through failure modes, the list was terrifying. What happens if the main processor crashes mid-stride? What if a sensor node loses BLE connection? What if stimulation doesn't turn off when it should? What if a muscle fatigues faster than predicted and I fall?
Every one of these scenarios could result in injury. Not hypothetically. Actually.
The realization that changed my approach: safety can't be a software feature. Software crashes. Software has bugs. If the only thing preventing runaway stimulation is a software check, eventually that check will fail.
The safety system I designed uses a dedicated microcontroller, an RP2040, running on an independent power supply. Its only job is monitoring the main system and killing power to the stimulator if anything goes wrong. It runs a window watchdog, not just a timeout watchdog. If heartbeats stop coming, stimulation dies. If heartbeats come too fast (suggesting something is running away), stimulation dies.
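A minimal Python sketch of the window logic (the real implementation lives on the RP2040, and the timing window below is illustrative, not my actual configuration):

```python
import time

class WindowWatchdog:
    """Kill stimulation unless heartbeats arrive inside a timing window."""

    def __init__(self, min_s=0.005, max_s=0.050, kill=None):
        self.min_s, self.max_s = min_s, max_s
        self.kill = kill or (lambda: print("STIM POWER CUT"))
        self.last = time.monotonic()

    def heartbeat(self):
        now = time.monotonic()
        dt, self.last = now - self.last, now
        if dt < self.min_s:
            self.kill()   # too fast: main loop is probably running away

    def poll(self):
        # Too-slow detection must be polled separately: a crashed main
        # processor will never call heartbeat() again.
        if time.monotonic() - self.last > self.max_s:
            self.kill()   # too slow: main processor stalled or dead
```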
The physical e-stop button doesn't talk to software at all. It directly opens a relay in the power path. Hardware interlock. No software can override it.
Current limiting uses analog comparator circuits. The software can request any current it wants. The hardware won't deliver more than 60mA.
This isn't paranoia. It's defense in depth, the same principle that makes me uncomfortable with "TENS units modulated by relay control" as a stimulation approach. The TENS unit provides charge balancing and safe output characteristics. But relays introduce new failure modes. What are the switching transients? What happens if a relay welds closed?
I don't know yet. I'll find out during bench testing. But I won't find out with electrodes attached to my body.
What Hobbyist Electronics Taught Me
I've been building electronics projects since I was a teenager. Gadgets. Robots. Drones. The usual progression of increasingly ambitious projects that mostly worked (and only caused a few small fires and blown fuses).
That background is simultaneously essential and dangerous for this project. Essential because I'm not afraid of hardware. I can read a schematic, solder a board, debug with an oscilloscope. The physical implementation doesn't feel like a black box.
Dangerous because familiarity breeds overconfidence. I've built systems where a bug meant a crashed drone or a confused robot. I've never built a system where a bug could burn my skin or cause a fall. The mental shift required treating every design decision as safety-critical. Not "will this work?" but "what happens when this fails?"
This is why I spec'd commercial TENS units as the stimulation source instead of building stimulators from scratch. A TENS unit is a known quantity with charge balancing, safe output characteristics, testing, and certification. Building my own stimulator would give me more control, but also more ways to create dangerous outputs.
Save the DIY energy for parts where failure means inconvenience, not injury.
The Trunk Sensor Revelation
My initial sensor design focused on the legs. That's where the action is, right? Detect what the legs are doing, stimulate accordingly. Then I read some of the research on postural control in SCI.
With complete spinal cord injury, there's no proprioceptive feedback from below the lesion level. You don't know where your legs are in space unless you look at them. Balance depends entirely on visual feedback and whatever sensation remains above the injury.
Here's the thing: the trunk is above my injury. I have full sensation there. And research shows that trunk accelerations correlate with balance and fall risk. The compensatory movements people make before a fall are detectable in trunk IMU data.
The final sensor array includes IMUs at pelvis, lower trunk, and upper trunk, not to control stimulation directly, but to detect instability before it becomes unrecoverable. If the trunk is doing something weird, maybe don't initiate the next step.
Sensing above AND below the injury. The legs need sensors for gait phase detection. The trunk needs sensors for stability monitoring. Different problems, but both essential.
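What might the stability gate look like? A first-pass heuristic, nothing validated; the thresholds are placeholders I'd have to tune against my own trunk data:

```python
import numpy as np

def trunk_stable(gyro_window, accel_window, rate_hz=100,
                 gyro_limit=1.5, jerk_limit=8.0):
    """Return True if the trunk looks quiet enough to initiate a step.

    gyro_limit is peak angular velocity (rad/s); jerk_limit is peak
    linear jerk (m/s^3). Both limits are illustrative placeholders.
    """
    peak_rate = np.max(np.abs(gyro_window))
    jerk = np.diff(accel_window, axis=0) * rate_hz
    return peak_rate < gyro_limit and np.max(np.abs(jerk)) < jerk_limit

rng = np.random.default_rng(0)
gyro = rng.normal(0, 0.1, (100, 3))                     # 1 s of quiet trunk
accel = rng.normal(0, 0.01, (100, 3)) + [0.0, 0.0, 9.81]
print(trunk_stable(gyro, accel))                        # -> True
```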
Latency As Teacher
Having built distributed systems professionally for years, I know about latency. I've optimized request paths, tuned database queries, reduced p99 response times. This project is different. When the consequence of latency is "user waits a bit longer," you optimize but you don't obsess. When the consequence is "stimulation arrives 200ms late and I stumble," latency becomes the entire design constraint.
Every architectural decision flows from the latency budget. Why distributed TinyML inference instead of centralized processing? Because BLE round-trip adds 15-30ms I can't afford. Why GPIO direct connections from shank sensors to stimulator triggers? Because even internal message passing adds delay. Why LSTM trajectory prediction? Because I need to compensate for delays I can't eliminate.
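For that last point, here's the rough shape of the predictor I have in mind, sketched in PyTorch. Every size here (feature count, hidden width, prediction horizon) is a placeholder, not a tuned design:

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Read a short IMU window, emit joint angles `horizon` steps ahead."""

    def __init__(self, n_features=12, hidden=64, n_joints=2, horizon=20):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_joints * horizon)
        self.n_joints, self.horizon = n_joints, horizon

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        pred = self.head(out[:, -1])   # predict from the final time step
        return pred.view(-1, self.horizon, self.n_joints)

# At 100 Hz, horizon=20 looks 200 ms ahead: roughly the delay I need
# to compensate for. A 50-sample window is 500 ms of recent motion.
model = TrajectoryPredictor()
future = model(torch.zeros(1, 50, 12))  # -> (1, 20, 2) joint-angle path
```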
Latency constraints reveal system architecture. If you have to hit a timing target, you can't hide behind abstractions. Every layer becomes visible because every layer contributes delay.
The Documentation-First Approach
I've spent months on this project and haven't assembled a single piece of hardware. This is deliberate.
Previous projects taught me that jumping straight to implementation often means building the wrong thing, wasting resources and time, then discovering the architecture doesn't support what you actually needed.
Documentation-first forces explicit decisions. If I can't explain why the safety watchdog needs an independent power supply, maybe I don't understand the failure modes well enough. If I can't justify the sensor placement with research, maybe I'm guessing.
The documentation is also insurance against myself. When I'm deep in implementation and tempted to cut corners, the written rationale reminds me why I made those decisions. Past-me thought this through. Present-me should trust that.
Publishing the research notebook adds accountability. People can see my reasoning. If my reasoning is wrong, I'd rather find out before I waste time on a dead end.
What I Don't Know
I've read hundreds of papers. I've designed architectures on paper. I've selected components and estimated power budgets. I don't know if any of it will work.
The fundamental uncertainty: can surface FES with closed-loop control achieve stable walking? The research says it should be possible. No one has commercialized it. Maybe there's a good reason. Maybe commercialization is just hard. I won't know until I try.
Specific things I can't know until I build and test:
Can LSTM prediction track time-varying electromechanical delay? EMD changes as muscles fatigue, from 50ms when fresh to 300ms after 20 minutes. The predictor needs to adapt in real-time. I've read papers suggesting it's possible. I haven't seen it demonstrated in a walking system. (A sketch of the adaptation I have in mind follows below.)
Will seven BLE nodes stay synchronized during continuous walking? BLE isn't a real-time protocol. Connection drops happen. Clock drift happens. I've built in redundancy, but whether it's enough remains to be seen.
How bad is the synthetic-to-real domain gap for gait phase classification? Models trained on simulated data often fail on real data. The augmentation strategies I'm planning might bridge the gap. Or they might not.
What failure modes haven't I imagined? The scariest category. I've thought hard about what could go wrong. I've certainly missed things.
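On that first question about electromechanical delay, the adaptation I have in mind is an online delay estimate, something like an exponential moving average. This assumes I can measure stimulation-to-movement onset at all, which is itself unproven:

```python
class EMDEstimator:
    """Track electromechanical delay online; alpha sets adaptation speed."""

    def __init__(self, initial_ms=80.0, alpha=0.1):
        self.emd_ms, self.alpha = initial_ms, alpha

    def update(self, measured_onset_ms):
        # measured_onset_ms: observed gap between stimulation onset and
        # detectable movement (detecting it robustly is the open part)
        self.emd_ms += self.alpha * (measured_onset_ms - self.emd_ms)
        return self.emd_ms

est = EMDEstimator()
for onset in [85, 95, 120, 160, 210]:   # fatigue creeping in
    lead_ms = est.update(onset)         # stimulate this much earlier
```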
Why Open Source?
I'm building this for myself. I want to walk again, even if only for short distances and at a slow pace. Or at least explore whether technology can make that possible.
But I'm also trying to make the design reproducible. Not because reproducibility proves I understand it (that's backwards, I could write clear instructions for something I barely grasp), but because others with SCI deserve access to whatever I learn.
There's also a practical reason. I need collaborators. I need people who know more about biomechanics, FES rehabilitation, or safety-critical embedded systems than I do. Open documentation is how I find them. I need other brains, other perspectives to catch my mistakes.
And if it doesn't work? If surface FES with KAFO bracing simply can't achieve stable closed-loop walking? That's valuable information too. The next person attempting this can learn from my failures instead of repeating them.
The Meta-Lesson
I've been thinking about why this project feels different from professional work.
Part of it is personal stakes. When you're building systems that will operate on your own body, every design decision carries weight.
But there's something else. This project has forced me to integrate knowledge across domains in a way that professional specialization rarely requires. Biomechanics. Control theory. Machine learning. Embedded systems. Safety engineering. Electronics. Each domain has its own literature, its own vocabulary, its own assumptions.
The interesting problems live at the intersections. The latency budget spans electronics and physiology. The safety architecture spans hardware and software. The synthetic data approach spans simulation and machine learning.
No single domain has the answer. The system only makes sense as a whole.
That's what makes it hard. It's also what makes it interesting.
Current State
No hardware assembled. No models trained. No walking yet.
The research notebook is public. The architecture is documented. Component selection continues. The next phase is simulation and synthetic data generation, followed by model training, hardware prototyping, and bench testing.
I don't know when I'll have results worth sharing. Hardware projects take longer than expected, but I'll document as I go. The failures might be more instructive than the successes.
The full research notebook is available on GitHub. Corrections, suggestions, and collaboration offers welcome.
