Congratulations on the launch, Stefan and Ilia! I'm a PhD student in robotics and autonomy, and I've always wondered why something like this didn't already exist, at least for 90% of the 'doable' field robotics tasks. I think you're close to what Skydio did in the enterprise autonomous-drone space - abstract away the autonomy part and put just a little effort into customer-specific requirements. A couple of quick questions:
1. When you say you're built on top of ROS, do you mean the autonomy stack you'd deploy in an actual robot is built on ROS? Are you using ROS or ROS2?
2. What hardware does your current autonomy stack use? For the parts of your stack that depend on deep-learning-based methods (e.g. anything using image or lidar data), the models you train would need a lot of collected and annotated data specific to a particular problem/industry, and gathering that would take a non-trivial amount of time - especially since you said the camera/sensor configuration is not fixed and can potentially be decided in the simulator by the user. How do you plan to tackle this?
PS: Any potential internship opportunities at Polymath in the coming few months that I can apply to?
To answer 2. in some more detail:
We generally like to have GPS, cameras, and lidar. Optionally, we add in IMU, radar, and close-in sensors (i.e., ultrasound).
We are approaching the ML problem somewhat differently: we try to give the robot an understanding of the navigability of a space. This is done with semantically segmented images, overlaid with depth data where needed.
Once we have this, we flatten it into a 2D costmap of areas where, kinematically (i.e., ground clearance, terrain-handling ability, allowable areas, etc.), the vehicle is allowed to go. This is fed to our planner, which in turn generates valid paths for the vehicle to take.
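For anyone curious what that flattening step looks like in practice, here's a minimal sketch of the idea. To be clear, this isn't Polymath's code - the class IDs, projection model (a straight-ahead camera), and function name are all my own assumptions, and a real system would use proper camera extrinsics:

```python
import numpy as np

# Hypothetical segmentation class IDs (assumed labels, not Polymath's).
NAVIGABLE = {0, 1}   # e.g. "paved", "gravel"
BLOCKED = {2, 3}     # e.g. "obstacle", "water"

def flatten_to_costmap(labels, depths, grid_size=(100, 100), cell_m=0.25):
    """Collapse per-pixel semantics + depth into a top-down 2D costmap.

    labels: (H, W) int array of segmentation class IDs
    depths: (H, W) float array of forward distances in metres
    Returns a (rows, cols) grid: 0 = free, 1 = kinematically disallowed.
    """
    rows, cols = grid_size
    costmap = np.zeros((rows, cols), dtype=np.uint8)
    h, w = labels.shape
    for v in range(h):
        for u in range(w):
            d = depths[v, u]
            # Crude projection of a pixel into a top-down grid cell,
            # assuming a forward-facing camera at the grid origin.
            r = int(d / cell_m)
            c = int((u - w / 2) * d / (w * cell_m)) + cols // 2
            if 0 <= r < rows and 0 <= c < cols and labels[v, u] in BLOCKED:
                costmap[r, c] = 1
    return costmap
```

A planner would then search this grid for paths that only cross free cells.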
The particular cameras and lidars used are abstracted away in a Hardware Abstraction Layer (HAL) that I've described a bit elsewhere on this page.
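Polymath's HAL internals aren't public, so purely as illustration of the pattern being described - an abstract sensor interface that vendor-specific drivers implement, with all names here hypothetical:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PointCloud:
    # (x, y, z) points in metres, vehicle frame
    points: List[Tuple[float, float, float]]

class Lidar(ABC):
    """Vendor-neutral lidar interface: the autonomy stack codes against
    this, and each supported sensor gets its own driver subclass."""
    @abstractmethod
    def scan(self) -> PointCloud:
        ...

class SimLidar(Lidar):
    """Stand-in driver, e.g. backed by a simulator instead of hardware."""
    def scan(self) -> PointCloud:
        return PointCloud(points=[(1.0, 0.0, 0.0)])
```

Swapping cameras or lidars then means writing one new subclass rather than touching the perception code.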
This sounds a lot like Jaybridge, acquired by Toyota Research. They had things like grain trailers driving next to harvesters until full, then driving to the edge of the field for pickup... also giant dump trucks operating within open mines. (These were just marketable examples; I believe the internals were quite general, in the context of "adding autonomy to existing, heavily specialized vehicles" without rebuilding them...)
1) We are mostly built on ROS (for better and worse) and are starting to migrate some containers to ROS2. Down the road I see an architecture that has some ROS, some ROS2, and maybe some homemade stuff (or maybe even other frameworks).
2) Ilia should be able to give you a better spec overview, but we're generally trying to be as stack-agnostic as is sanely possible. Roboticists have really strong preferences when it comes to hardware (often shaped by which components previously ruined demos in their lives), and we want to be able to work with whatever you want to work with.
Reach out re: internships! No promises, but now that we have a stack you can build on for free, that will definitely affect our decision process!