
Mobile robots and autonomous forklifts are already improving efficiency and safety in warehouses, and the latest generation of machine vision and artificial intelligence promises further optimization. Control One Logistics Pvt. Ltd. has developed vision-based robots using agentic AI to move materials with less overhead.
Control One claimed that, unlike traditional automated systems that rely on rigid scripting, QR codes, or lidar, its proprietary One OS can transform utility vehicles into smart agents. These agents can operate fully autonomously and adapt to dynamic environments without needing infrastructure changes, said the Bengaluru, India-based company.
Control One added that its One Command station integrates with warehouse management systems (WMS) and even supports natural-language task assignments. The dashboard “enables coordinated, high-efficiency fleet operations without cloud dependency,” it asserted.
Darshan S., head of growth at Control One, spoke with Automated Warehouse about the company’s AI- and vision-based approach to warehouse robotics.
Control One works to ease automation adoption
How did Control One decide to go with a purely vision-based approach to autonomy?
Darshan: Our CEO has been in robotics since 2012, working with AGVs [automated guided vehicles] and RaaS [robotics as a service]. In 2015, he started a training program for kids at 107 physical centers, and then COVID-19 hit.
When NVIDIA brought inferencing to the edge, he wanted to bring agentic AI into the warehouse, running entirely on cameras, similar to Tesla’s approach. We’re focused on the application layer more than the underlying technology and have partnered with OEMs for pallet jacks.
What customer need does this respond to?
Darshan: We’re solving two critical challenges faced by today’s industrial and logistics operations: skilled labor shortages and the inflexibility of existing automation.
Warehouses and factories struggle to find trained operators for repetitive, vehicle-based tasks. Our AI agents eliminate that need by taking over the driving and decision-making responsibilities entirely.
Many existing robotic solutions rely on lidar or QR codes and require infrastructure modifications. They often fail in dynamic environments where layouts and objects change frequently.
Our agentic, vision-based system needs no changes to existing infrastructure and adapts to environmental changes in real time. This makes it far more robust, scalable, and cost-effective than older automation approaches.
Agentic AI is hardware-agnostic
Are the cameras feeding One OS stationary, on the robots, or both?
Darshan: The cameras are fixed onboard the robots — they are not stationary in the environment. Each robot is equipped with its own vision system, enabling it to perceive and navigate the environment autonomously without relying on external infrastructure.
Our AI sits on any machine. We’re looking at solar panel cleaning next, as well as cleaning and maintenance, agriculture, construction, and food delivery.
How is Control One Logistics’ agentic AI different from existing fleet management software or AI?
Darshan: Our differentiator lies in the autonomy of the individual vehicle. Control One’s One OS, an AI driver, sits directly on the utility vehicle. It independently perceives, navigates, and completes tasks, turning traditional utility equipment into fully autonomous agents.
Complementing this is our One Command station, which integrates seamlessly with existing WMS platforms. It pulls pick lists and assigns tasks intelligently across the AI agents, ensuring optimal load balancing and minimal downtime.
Key highlights include:
● Voice-command driven: A human can say, “Pick the pallet from R3 fourth bin and drop off at R8 third bin,” and One Command understands and assigns the task accordingly. Users can also verbally ask a robot what it sees, which can be helpful for finding items or for slowly moving fragile ones, such as a TV.
● All on the edge: There’s no cloud dependency — everything runs on local edge compute, ensuring maximum data security and real-time response. Whichever robot is closest gets automatically assigned, without relying on the cloud, which can pose cybersecurity and latency challenges.
● True agentic behavior: Each vehicle operates as its own intelligent agent, so there’s no need for centralized micromanagement. It’s continually learning and retrains itself during downtime.
This architecture transforms fleet coordination from rigid scripting to autonomous decision-making.
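To make that dispatch logic concrete, here is a minimal illustrative sketch of how a parsed voice command might be assigned to the nearest idle robot entirely on local edge compute. The class names, rack coordinates, and distance heuristic are assumptions for illustration, not Control One’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical structured task that a natural-language parser might produce from
# a command such as "Pick the pallet from R3 fourth bin and drop off at R8 third bin".
@dataclass
class PalletTask:
    pick_rack: str   # e.g., "R3"
    pick_bin: int    # e.g., 4
    drop_rack: str   # e.g., "R8"
    drop_bin: int    # e.g., 3

@dataclass
class Robot:
    robot_id: str
    position: tuple   # (x, y) position on the warehouse floor, in meters
    idle: bool = True

def assign_nearest_idle_robot(task, robots, rack_positions):
    """Assign the task to the closest idle robot; runs entirely on local edge compute."""
    pick_xy = rack_positions[task.pick_rack]
    idle_robots = [r for r in robots if r.idle]
    if not idle_robots:
        return None  # nothing available; the task stays queued
    def dist(robot):
        dx = robot.position[0] - pick_xy[0]
        dy = robot.position[1] - pick_xy[1]
        return (dx * dx + dy * dy) ** 0.5
    chosen = min(idle_robots, key=dist)
    chosen.idle = False  # mark busy so the next task goes elsewhere
    return chosen

# Example: two robots; the task's pick rack is closer to "pj-2".
task = PalletTask("R3", 4, "R8", 3)
robots = [Robot("pj-1", (0.0, 0.0)), Robot("pj-2", (12.0, 3.0))]
racks = {"R3": (10.0, 4.0), "R8": (30.0, 4.0)}
print(assign_nearest_idle_robot(task, robots, racks).robot_id)  # -> pj-2
```

A production scheduler would presumably also account for the fleet-wide load balancing mentioned above, but the nearest-idle-robot rule captures the no-cloud, low-latency assignment behavior being described.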

Vision offers some advantages over lidar
How long has this technology been in development and testing?
Darshan: It took six to eight months of development to reach an MVP [minimum viable product], then more than 12 months of testing across manufacturing and 3PL [third-party logistics] clients.
Customer feedback has been strong, with clients that previously struggled with pre-programmed robots now seeing smoother autonomous operations without the overhead.
We have tested the system in a 16,000 sq. ft. (1,486.4 sq. m) facility in Bengaluru, and it can run inference in different lighting conditions, such as daytime or under lights at night. It took us only one and a half weeks to deploy there because training data and R&D cost less in India.
What are some of the benefits of this approach in comparison with established QR codes or lidar?
Darshan: Our camera-based system offers multiple advantages:
- No infrastructure changes: QR code and lidar systems often require fixed markers, reflectors, or infrastructure tweaks. Ours doesn’t. The system visually understands aisles, junctions, and objects on its own.
- Adaptability: Infrastructure changes often require reprogramming or retraining in QR/lidar systems. Control One’s systems adapt in real time — there’s no need to reconfigure.
- Cost-efficiency: Cameras can significantly reduce BOM [bill of materials] cost compared with lidar, giving us a strong pricing edge.
- Contextual awareness: Because the system sees the environment, it understands context. For example, if a pallet contains fragile materials, the robot automatically slows down. We implemented this feature based on real client feedback.
We don’t have data from wheel odometry or radar, but these vehicles will still have safety lidars. Robots can fully depend on vision, and we can put our brain in different form factors, from pallet jacks and forklifts to goods-to-person [G2P] robots.
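As a rough illustration of the contextual-awareness point above, the sketch below shows how labels reported by the onboard vision system could cap a vehicle’s speed when a fragile load is detected. The label set and speed values are hypothetical assumptions, not Control One’s actual parameters.

```python
# Hypothetical speed-capping rule for the fragile-load behavior described above.
NORMAL_SPEED_MPS = 1.5    # assumed cruise speed for a loaded pallet jack
FRAGILE_SPEED_MPS = 0.5   # assumed reduced speed when fragile goods are detected

def speed_limit_for_load(detected_labels):
    """Choose a speed cap based on labels reported by the onboard vision system."""
    fragile_labels = {"tv", "glass", "monitor"}  # illustrative label set
    if set(detected_labels) & fragile_labels:
        return FRAGILE_SPEED_MPS
    return NORMAL_SPEED_MPS

# Example: the camera pipeline reports a television on the pallet.
print(speed_limit_for_load(["pallet", "tv"]))  # -> 0.5
```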

Control One preps to scale from India to the U.S.
Where are your enterprise clients to date? What types of facilities have tested this technology so far?
Darshan: Control One currently works with three enterprise clients in India: Roots MultiClean and SKS CleanTech for manufacturing, and Delhivery for 3PL and logistics. We’ve quietly moved more than 25,000 tons of material across their warehouses.
These clients operate in diverse real-world environments and have helped us rigorously validate and refine our technology. We’re now moving to the U.S. for initial pilot deployments.
Are you raising funding in order to scale in the U.S.? How do tariffs affect you?
Darshan: Yes, we’re currently raising our seed round to support U.S. expansion. Most of our $570,000 in pre-seed funding came from U.S.-based investors. They included veterans from Tesla, Amazon, NVIDIA, and others in the supply chain and robotics ecosystem, such as iRobot co-founder Helen Greiner.
Customers want to implement at scale, so to overcome their uncertainty about the complexity of adoption, our startup is demonstrating its successes in India. We’re offering full-stack RaaS on a monthly subscription basis; we sell outcomes, not just machines.
Regarding tariffs, exposure is negligible at our current scale. We’re in talks with a couple of OEMs and system integrators in the U.S. and expect to pilot in the U.S. early next year.
