# imply+infer – Ideal customer profile (ICP)

Use this when reasoning about who imply+infer is for and what problems we solve.

---

## Core problem we solve

- **Hardware**: Best-in-class AI silicon exists (e.g. NVIDIA Jetson at 67–275 TOPS, Qualcomm).
- **Reality**: Most teams cannot use it. Setup takes 40–120+ hours: CUDA/TensorRT/driver issues, scattered docs, and an “evaluation” vs “production” gap of months.

imply+infer turns evaluation hardware into plug-and-play, production-ready systems.

---

## Primary ICP

- **Engineers and developers** building edge AI: prototypes, robots, drones, smart infrastructure, autonomous systems.
- **Researchers** running vision, LLMs, or multi-modal models at the edge.
- **Startups and small teams** who need to ship quickly without a dedicated integration team.
- **Enterprises** deploying fleets that want custom configurations and support.

---

## Jobs to be done

- Go from “eval board in a box” to “boots in <30s, stack works” in minutes, not months.
- Avoid dependency hell: CUDA, TensorRT, OpenCV, drivers, and kernels, all verified together.
- Start building on day one with pre-installed apps (Ollama, YOLO, Roboflow, etc.) instead of spending weeks on environment setup.

---

## Fit signals

- Evaluating or already using Jetson, Qualcomm, or similar edge AI platforms.
- Has felt the pain: “TensorRT won’t compile,” “the camera needs a custom kernel,” “PyTorch wants a different NumPy.”
- Willing to pay a premium for a complete, pre-configured system (vs. DIY from components).
- Values time-to-prototype and time-to-production over minimal hardware cost.

---

## Not a fit (or secondary)

- Pure cloud-only AI; no edge or on-device requirement.
- Hobbyists who enjoy and have time for deep system integration.
- Buyers who only care about the lowest component cost and will do all integration in-house.

---

## How we describe ourselves to the market

- “The Raspberry Pi moment for professional AI hardware.”
- “Making powerful AI hardware actually usable.”
- “Where implying capability meets inferring compatibility.”