Believe it or not, depth sensing was just as big a theme as 5G at MWC Shanghai. After hearing Vivo’s TOF 3D announcement and experts’ take on its importance for XR and biometrics, we came home convinced that companies are going to pour serious resources into improving it. Lucid CEO Han Jin only strengthened that belief when he revealed how cameras can already understand depth as accurately and profoundly as we do.
When Lucid was founded, Jin and his team started by building robot eyes, asking how they should act as cameras. That reasoning led them to create LucidCam, the first VR180 3D camera, rather than a 360-degree one as many competitors chose to do.
“How should a camera capture the world? Like our eyes capture it: peripheral vision with 3D depth. We created it 180-degrees, I don’t see 360, I don’t have eyes in the back.”
At Mobile World Congress Shanghai, Han Jin told us the company’s next step is not to produce a LucidCam 2, but to offer the core 3D software tech to device makers everywhere. According to Lucid, their solution is flexible enough to be used with handsets that have a dual camera module, as well as with drones, VR/AR hardware and robots, among others.
At the rate the mobile market is growing, it’s no wonder they’re taking this route.
What makes it attractive to OEMs, though? For one, the software-only solution eliminates the need for extra hardware integration, which in turn helps OEMs give customers what they want: “smaller but more powerful” devices, as Matt Byrne wisely put it.
Secondly, Lucid’s tech is capable of “merging” with any dual or multi-camera device, inserting what Han Jin calls a “vision profile” during the manufacturing process. This apparently mimics how the brain processes and learns from what humans see.
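Lucid hasn’t published what a “vision profile” actually contains, but for a stereo pair it would plausibly hold the per-device calibration data: each camera’s intrinsics plus the tiny mounting offsets between the two lenses, measured once at the factory. Here is a minimal Python sketch of that idea; every field name and value below is our own illustrative assumption, not Lucid’s format:

```python
# Hypothetical per-device "vision profile" for a dual-camera phone.
# All field names and values are illustrative assumptions, not Lucid's format.
import json

vision_profile = {
    "device_id": "unit-0001",
    "left_camera": {
        "focal_length_px": 700.0,             # intrinsics measured at calibration
        "principal_point_px": [320.0, 240.0],
        "distortion": [0.01, -0.002, 0.0, 0.0, 0.0],
    },
    "right_camera": {
        "focal_length_px": 702.5,             # the two modules never match exactly
        "principal_point_px": [321.4, 239.1],
        "distortion": [0.012, -0.003, 0.0, 0.0, 0.0],
    },
    "baseline_m": 0.012,                      # distance between the two lenses
    "rotation_deg": [0.02, -0.01, 0.0],       # mounting misalignment to correct for
}

# Written once during manufacturing, read back by the depth software at runtime.
with open("vision_profile.json", "w") as fh:
    json.dump(vision_profile, fh, indent=2)
```

Storing this per unit, rather than assuming ideal optics, is what would let one piece of software work across many different dual-camera designs.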
How good is it? Indoors, the tech shouldn’t give users any trouble, but there are no guarantees if they take it outside, especially after dark: “If you go outdoors and it’s very dark, it becomes challenging. I don’t recommend it in low light.”
Lucid’s solution is based on “pure stereo”; Han emphasized that the team “found a way to use it for depth by applying AI and machine learning to it.”
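For context, “pure stereo” rests on a simple geometric fact: a nearby point shifts more between the two views than a distant one, and depth follows as Z = f·B/d. Lucid layers machine learning on top of this, but the baseline principle can be sketched with OpenCV’s classic block matcher; the file names, focal length, and baseline below are assumptions for illustration:

```python
# A minimal classic-stereo sketch; Lucid's production pipeline replaces the
# hand-tuned matcher below with learned matching, per the interview.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed input images
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each pixel, estimate how far it shifts (disparity) between the two views.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

# Similar triangles give depth: Z = f * B / d,
# with focal length f in pixels and baseline B in meters.
f_px, baseline_m = 700.0, 0.012   # assumed values for a phone-sized module
depth_m = np.where(disparity > 0, f_px * baseline_m / np.maximum(disparity, 1e-6), 0.0)
```

This also explains the low-light limitation Han mentioned: stereo matching needs visible texture in both views to find correspondences, and in the dark there is little texture to match.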
Machine learning plays a big part in the processing, too: “The processing is done now in the cloud but we are shifting to the edge, depending on the kind of chip the camera will support. At first, we worked with a Qualcomm Snapdragon chip and now a lot of them come with a neural engine so we can process on the edge, as well.”
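In code terms, that shift is essentially a dispatch decision: run the depth network on-device when the chipset has a neural engine, otherwise round-trip frames to a cloud service. A hedged sketch of that trade-off, with all names being our own placeholders rather than Lucid’s API:

```python
# Illustrative cloud-vs-edge dispatch; class, function, and field names are
# placeholders, not Lucid's API.
from dataclasses import dataclass

@dataclass
class Chipset:
    name: str
    has_neural_engine: bool  # newer mobile SoCs ship a dedicated NPU

def depth_backend(chip: Chipset) -> str:
    """Pick where depth inference runs, per the trade-off Han Jin describes."""
    if chip.has_neural_engine:
        return "edge"    # process frames on-device: lower latency, works offline
    return "cloud"       # otherwise upload frames to a remote service

print(depth_backend(Chipset("npu-equipped-soc", has_neural_engine=True)))    # edge
print(depth_backend(Chipset("older-soc", has_neural_engine=False)))          # cloud
```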
Want more info and specs? Han Jin goes into detail in the video above, explaining how their tech can teach cameras to understand depth as accurately as we do!