Our Team

A multidisciplinary team pushing the boundaries of AI hardware interfaces and peripheral compatibility. We're building the future of intelligent, adaptive hardware systems.

Core Team

Meet the researchers and engineers building imply+infer

Méschac Irung

Founder & Lead Researcher

Leading research in AI hardware interfaces and peripheral compatibility. Specializing in kernel-level driver intelligence and edge AI systems.

Théo Balick

Frontend Engineer

Building intuitive interfaces for complex hardware systems. Expert in React, Next.js, and WebGL visualizations.

Glodie Lukose

Frontend Engineer

Crafting responsive and performant web applications. Focused on developer experience and design systems.

Bernard Ngandu

Backend Engineer

Developing robust backend systems and APIs for hardware abstraction. Expert in system programming and Linux internals.

Built with Industry-Leading Technology

Building with the best tools and platforms in AI, hardware, and infrastructure


Technology Partners & Tools

We collaborate with industry-leading platforms and open source projects to build the most advanced AI hardware interface solutions.

Hardware Platforms

NVIDIA

Jetson platform for edge AI computing. Our primary hardware platform for embedded AI applications.

Raspberry Pi

ARM-based single-board computers, used for testing cross-architecture compatibility.

AI Frameworks

OpenAI

Whisper speech recognition model. Powers our voice assistant and transcription capabilities.

Anthropic

Claude AI for research assistance and development tooling.

Ollama

Local LLM hosting framework. Enables running Gemma, Qwen, and other open models on edge devices.

Llama.cpp

Efficient LLM inference engine. Optimized for running large language models on consumer hardware.

Roboflow

Computer vision dataset management and training. Streamlines model development and deployment.

YOLOv8

Real-time object detection framework. Industry-leading performance for edge inference.

TensorRT

NVIDIA's high-performance deep learning inference optimizer.

Infrastructure

Docker

Containerization platform, ensuring reproducible deployments across edge devices.

GitHub

Open source collaboration platform. Hosting our research code and community projects.

Vercel

Edge deployment platform. Powering our web infrastructure and documentation.

LiveKit

Real-time audio/video infrastructure. Enables low-latency streaming and communication.

Software & Tools

Linux Foundation

Open source kernel development. Contributing to driver compatibility and peripheral support.

Next.js

React framework powering our web applications and documentation.

CUDA

Parallel computing platform. Accelerating AI inference on NVIDIA GPUs.

Join Our Mission

We're always looking for talented researchers, engineers, and collaborators who share our vision of making hardware interfaces intelligent and adaptive.