Phase 01: Neuro-Muscular Interface

Control devices with silent thoughts: our neuro-muscular interface reads brain activity and subtle vocal-muscle signals, allowing effortless, silent AI interaction, enhanced by adaptive learning.

We’re creating a new interface that combines electroencephalography (EEG) and surface electromyography (sEMG) to enable silent, thought-driven control of devices.

Capabilities

Silent, Thought-Driven Control

You don’t need to speak out loud or physically interact with your device. By combining brain signals (EEG) with subtle vocal-muscle movements (sEMG), the system interprets your thoughts and subvocal signals to take action.
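
One way to picture how the two signal streams combine is feature-level fusion: extract band-power features from the EEG, amplitude features from the sEMG, concatenate them, and feed the result to a shared decoder. The sketch below illustrates that idea only; the channel counts, window sizes, feature functions, and classifier are illustrative assumptions, not the actual pipeline.

```python
# A minimal sketch of feature-level EEG/sEMG fusion, assuming windowed,
# pre-filtered signals. All shapes and the classifier are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bandpower_features(eeg_window, fs=250):
    """Average power in canonical EEG bands (delta..beta) per channel."""
    freqs = np.fft.rfftfreq(eeg_window.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg_window, axis=-1)) ** 2
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
    return np.concatenate(
        [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands]
    )

def emg_features(emg_window):
    """Root-mean-square amplitude per sEMG channel (muscle activation level)."""
    return np.sqrt((emg_window ** 2).mean(axis=-1))

def fuse(eeg_window, emg_window):
    """Concatenate both modalities into one feature vector for the decoder."""
    return np.concatenate([bandpower_features(eeg_window), emg_features(emg_window)])

# Toy data: 200 windows of 8-channel EEG and 4-channel sEMG,
# each labelled with one of three silent commands.
rng = np.random.default_rng(0)
X = np.stack([fuse(rng.standard_normal((8, 250)), rng.standard_normal((4, 250)))
              for _ in range(200)])
y = rng.integers(0, 3, size=200)

decoder = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted command:", decoder.predict(X[:1]))
```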

Adaptive and Personalized Learning

Over time, the system learns from your interactions, gradually becoming more responsive and aligned with your unique usage habits. The more you use it, the more intuitive it becomes, offering a truly personalized experience.
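
As a rough illustration of this kind of per-user adaptation, the sketch below folds each confirmed interaction back into an incrementally trained decoder. The SGD classifier and the confirm-or-correct loop are stand-in assumptions; the system's actual adaptive-learning mechanism is not described here.

```python
# A minimal sketch of per-user adaptation, assuming the decoder exposes an
# incremental-update step. Feature dimensions and labels are illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

N_COMMANDS = 3
decoder = SGDClassifier(loss="log_loss")

# Bootstrap from a short calibration session (stand-in random data here).
rng = np.random.default_rng(1)
X_cal, y_cal = rng.standard_normal((60, 36)), rng.integers(0, N_COMMANDS, 60)
decoder.partial_fit(X_cal, y_cal, classes=np.arange(N_COMMANDS))

def on_interaction(features, confirmed_command):
    """After the user confirms (or corrects) an action, fold that example
    back into the model so future predictions drift toward their habits."""
    decoder.partial_fit(features.reshape(1, -1), [confirmed_command])

# Each confirmed interaction nudges the decoder toward this user's signals.
on_interaction(rng.standard_normal(36), confirmed_command=2)
```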

Effortless Multi-Device Integration

The technology works across a variety of devices, from computers to smart home systems, making control hands-free and efficient. Whether for productivity, entertainment, or accessibility, this integration opens new possibilities for how you engage with technology.

Accessibility for Everyone

By combining both EEG and sEMG, this system can be especially useful for people with speech or motor impairments, providing a way to interact with technology through thought alone. While this area is still being explored, it holds great promise for accessibility.

Natural User Experience

The system is designed to feel as natural as possible. With silent interactions and adaptive learning, you’re not just using technology—you’re interacting with it in a way that feels smooth, intuitive, and unobtrusive.

Introducing aLAM

Our Advanced Learning Action Model (aLAM) enhances this system by intelligently adapting to your behavior and executing tasks on your behalf. It can handle a wide range of tasks, assisting you seamlessly in both language-based and non-language-based activities. In the video below, users purchase items simply by visualizing a broad category in their mind.

Prototype 1

The first prototype of our BCI marks a significant leap toward seamless human-AI interaction. It harnesses state-of-the-art advances in noninvasive neural signal detection and processing to decode subvocal signals and thoughts. Through a combination of surface electromyography (sEMG) and electroencephalography (EEG), it translates subvocal muscle readings and neural activity into actionable data, enabling hands-free device control and interaction with AI models.

Detection Accuracy: 91.5%

Vocabulary: 3,500+ words

Word Error Rate: 37% (see the sketch below)

Latency: 800 ms
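
For readers unfamiliar with the metric, word error rate is the standard word-level Levenshtein (edit) distance between the decoded text and the reference, normalized by reference length; a minimal sketch:

```python
# Word error rate: minimum word-level edits (substitutions, insertions,
# deletions) to turn the hypothesis into the reference, over reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution in a four-word reference -> 25% WER.
print(word_error_rate("turn on the lights", "turn off the lights"))  # 0.25
```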

Our Core Focus with Prototype 1

We aim to achieve accurate signal interpretation by integrating simulated sEMG data with actual EEG readings, exploring speech decoding and text generation. The device leverages DEMANN for advanced machine learning and is designed to progressively enhance its decoding capabilities through self-learning and reinforcement learning techniques.
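
As a rough picture of that setup, the sketch below pairs real EEG windows (stand-in arrays here) with simulated sEMG for the same silent utterance, fuses them, and trains a decoder. The simulator, feature functions, and classifier are illustrative assumptions; DEMANN itself is not shown.

```python
# A minimal sketch of mixing simulated sEMG with (placeholder) real EEG
# during training. Everything below is an illustrative stand-in.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

def simulate_semg(label, n_channels=4, n_samples=250):
    """Hypothetical simulator: label-dependent activation level plus noise."""
    return 0.5 * (label + 1) * np.abs(rng.standard_normal((n_channels, n_samples)))

def fused_features(eeg_win, emg_win):
    """Crude fusion: per-channel EEG variance concatenated with sEMG RMS."""
    return np.concatenate([eeg_win.var(axis=-1),
                           np.sqrt((emg_win ** 2).mean(axis=-1))])

labels = rng.integers(0, 3, size=200)        # silent-utterance classes
eeg = rng.standard_normal((200, 8, 250))     # placeholder for recorded EEG

# Pair each real EEG window with a simulated sEMG window before fusing.
X = np.stack([fused_features(eeg[i], simulate_semg(labels[i]))
              for i in range(len(labels))])

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)
print("training accuracy:", model.score(X, labels))
```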

This wearable device, built as a necklace for ease of use and discreet wear, is designed to be low-cost and efficient. Our focus in this early phase is on maximizing accuracy in pre-model training, setting the foundation for a future of highly intuitive, real-time AI-driven communication.

With this prototype, we are redefining how users can interact with technology, aiming to provide a natural, thought-driven interface that requires no physical movement. It is the first step in a journey toward true human-AI symbiosis.

Looking Ahead

We are now refining the Neurontech AI BCI to achieve over 95% real-time accuracy in decoding thoughts and subvocal speech. We aim to make the device consumer-grade, the size of a coin, capable of 12+ hours of continuous use, and to expand its application beyond assistive technology. The long-term vision includes commercialization and partnerships with the tech and enterprise sectors, positioning the BCI at the forefront of human-AI interaction.

Acknowledgment

Research Leads: Etna Ozkara, Chris Geng