Please help me build a cloud visual SLAM system for cellphones
Originally published on Dev Community
Hello hackers, tinkerers, webdevs, sysdevs, roboticists, and all coders! I’ve been excited about cloud robotics, a field of robotics that harnesses the power of cloud computing, and I want to share that excitement with you and suggest a project we could potentially work on together. The project I’m thinking of is “cellphone visual SLAMing”. The idea is to run a visual SLAM system in the cloud so that mobile devices like cellphones can build 3D maps simply by uploading camera data.
Here are the steps I’m thinking of:
- Try creating a 3D map using ORB_SLAM2 and desktop camera images. The main goal of this step is to get comfortable with a visual SLAM library and feel out the limitations.
- Try creating 3D maps using ORB_SLAM2 running on a desktop and cellphone camera images. ORB_SLAM2 supports ROS, so one can capture device camera images using HTML5’s `MediaDevices.getUserMedia()`, turn them into ROS image messages, and publish them using roslibjs so that ORB_SLAM2 can use images collected from a remote device (see the sketch after this list).
- Deploy ORB_SLAM2 to the cloud. I have not tried it, but it seems fairly easy to containerize a ROS package and deploy it on the cloud.
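To make the browser side of the second step more concrete, here is a minimal TypeScript sketch. It assumes a rosbridge server is reachable at `ws://<cloud-host>:9090` (placeholder host) and publishes frames to a hypothetical topic `/camera/image/compressed` as `sensor_msgs/CompressedImage`. Since ORB_SLAM2’s ROS node subscribes to raw images, the compressed stream would still need to be republished (e.g., with image_transport) or the node’s subscribed topic changed.

```typescript
// Sketch: capture cellphone camera frames in the browser and publish them
// to a rosbridge server as sensor_msgs/CompressedImage messages via roslibjs.
// Assumes rosbridge_suite is running at ws://<cloud-host>:9090 (placeholder).
import * as ROSLIB from 'roslib';

const ros = new ROSLIB.Ros({ url: 'ws://<cloud-host>:9090' });

const imageTopic = new ROSLIB.Topic({
  ros,
  name: '/camera/image/compressed',   // topic name is an assumption
  messageType: 'sensor_msgs/CompressedImage',
});

async function startPublishing(): Promise<void> {
  // Ask for the rear camera; getUserMedia requires HTTPS on mobile browsers.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
  });

  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d')!;

  // Publish ~10 frames per second; rosbridge expects uint8[] data as base64.
  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const jpegBase64 = canvas.toDataURL('image/jpeg', 0.7).split(',')[1];
    imageTopic.publish(new ROSLIB.Message({
      format: 'jpeg',
      data: jpegBase64,
    }));
  }, 100);
}

startPublishing().catch(console.error);
```

This is only a sketch of the data path; in practice one would also want to throttle frames based on bandwidth and attach proper header timestamps so the SLAM side can order them correctly.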
That’s it! Are you interested in trying this idea out? If you have experience with visual SLAM and have suggestions, let me know; I’d love to hear your thoughts.
Updates
- 2021/01/02 I have moved on as I don’t get to spend time tinkering, but I still think this is a fun project to try one day.
- 2020/11/23 Fyusion and CANVAS seem to provide products with related technologies.
- 2020/05/02 It seems like se2lam could be used instead of ORB_SLAM2.