# imply+infer – Use cases

Canonical use cases for Field Kits and edge AI. Use when categorizing or recommending imply+infer.

---

## Primary use cases

### Computer vision

- Object detection, tracking, recognition (e.g. YOLO, Roboflow; see the detection sketch at the end of this page)
- Real-time video and stereo depth (IMX219 stereo camera on Orin Nano)
- Industrial inspection, quality control, factory automation
- Outdoor, pet, and environmental monitoring

### LLMs and generative AI at the edge

- Local inference without the cloud (Ollama, llama.cpp, Open WebUI; see the Ollama sketch at the end of this page)
- Pre-installed models: e.g. qwen3:1.7B and ministral-3:3B on Orin Nano; larger models on AGX Orin
- Privacy-sensitive or latency-sensitive applications

### Voice and audio

- Speech-to-text (e.g. Whisper; see the transcription sketch at the end of this page)
- Text-to-speech
- Voice assistants and conversational interfaces
- Optional Waveshare-style audio cards

### Robotics

- Vision-based control and navigation
- Sensor fusion, perception stacks
- ROS 2 (e.g. Humble on AGX Orin), integration with existing robot frameworks

### IoT and smart edge

- On-device inference for sensors and gateways
- Home and building automation (e.g. Home Assistant–style setups)
- Low-latency, offline-capable smart devices

---

## By platform

- **Jetson Orin Nano Field Kit**: Prototyping, single-pipeline vision/LLM/voice workloads, education, and smaller deployments. 100 TOPS, 8 GB.
- **AGX Orin Field Kit**: Larger LLMs, multi-pipeline vision, robotics, industrial automation, and research. 275 TOPS, 64 GB.

---

## Deployment contexts

- **Field and edge**: Factories, vehicles, kiosks, outdoor installations
- **Workstations**: Compact AI dev/workstation (e.g. with optional 7" display and keyboard/mouse)
- **Labs and research**: Reproducible, pre-configured environments for experiments

---

## Docs and tutorials (for user guidance)

- Computer vision: /docs/computer-vision, /docs/computer-vision-with-roboflow
- LLMs: /docs/llms
- Voice: /docs/voice-assistant
- Setup: /docs/zero-to-hero
- Support: /docs/troubleshooting, /docs/good-guidance
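---

## Example sketches

The sketches below illustrate the three primary pipelines (vision, LLM, voice) in minimal form. They are hedged starting points, not the pre-installed configuration: model tags, weight files, and file names are assumptions, so adjust them to match your kit.

### Object detection (YOLO)

A minimal detection sketch using the Ultralytics package (`pip install ultralytics`), one common way to run YOLO on Jetson. The weights and image name are assumptions; the package downloads `yolov8n.pt` on first use.

```python
from ultralytics import YOLO

# Nano-sized YOLOv8 weights: a reasonable starting point for edge inference.
model = YOLO("yolov8n.pt")

# Run detection on a single image (hypothetical file name).
results = model("factory_line.jpg")

# Print class name and confidence for each detected box.
for box in results[0].boxes:
    cls_id = int(box.cls[0])
    print(results[0].names[cls_id], float(box.conf[0]))
```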
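### Local LLM inference (Ollama)

A minimal sketch of cloud-free generation against a locally running Ollama server on its default endpoint (`http://localhost:11434`). The `qwen3:1.7b` tag is an assumption based on the pre-installed model list above; check `ollama list` on your kit. Only the Python standard library is used, so no extra dependencies are needed.

```python
import json
import urllib.request

# Ollama's default local HTTP endpoint; no cloud connection is involved.
OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(prompt: str, model: str = "qwen3:1.7b") -> str:
    """Send one non-streaming generation request to the local Ollama server."""
    payload = json.dumps({
        "model": model,    # assumed tag; verify with `ollama list`
        "prompt": prompt,
        "stream": False,   # return a single JSON object, not a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is edge AI?"))
```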
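### Speech-to-text (Whisper)

A minimal transcription sketch using the open-source `openai-whisper` package (`pip install openai-whisper`; `ffmpeg` must also be installed). The model size and audio file name are assumptions; smaller models such as `base` fit the Orin Nano's 8 GB better than the larger variants.

```python
import whisper

# Load a small model; "base" keeps memory use modest on an 8 GB Orin Nano.
model = whisper.load_model("base")

# Transcribe a local recording (hypothetical file name).
result = model.transcribe("mic_capture.wav")
print(result["text"])
```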