Apple's new TrueDepth camera in the iPhone X sure is impressive, but it also sounds familiar. It works by using a projector to cast 30,000 dots onto your face, which it then reads with an infrared camera. That sounds a lot like how the Microsoft Kinect works.
The dots used in structured light 3D vision
The Kinect uses several approaches to get a good 3D image. One is called structured light: a known pattern of dots is projected onto the scene, and the way that pattern shifts and deforms is used, with the help of machine learning, to reconstruct the 3D scene. This is done with a dot projector and an IR camera, the same pair found on the iPhone X.
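The geometry behind structured light is plain triangulation: a dot's apparent shift (disparity) between where the projector placed it and where the camera sees it encodes depth. Here is a minimal sketch of that relationship; the focal length and baseline values are hypothetical stand-ins, not the actual Kinect or iPhone X calibration.

```python
import numpy as np

# Hypothetical calibration values, for illustration only:
FOCAL_PX = 580.0    # assumed camera focal length, in pixels
BASELINE_M = 0.075  # assumed projector-to-camera baseline, in metres

def depth_from_disparity(disparity_px):
    """Triangulate depth for each dot: z = f * b / d.

    A dot that shifts more between its projected and observed
    positions (larger disparity) is closer to the camera.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return FOCAL_PX * BASELINE_M / disparity_px

# Three dots with decreasing disparity sit at increasing distances.
print(depth_from_disparity([50.0, 25.0, 10.0]))
```

A real system does this for all 30,000 dots at once and then fills in the gaps between them, but the per-dot math is this simple.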
But a regular color camera is also used for an additional depth cue. Basically, it uses depth of field to estimate how far away things are: objects farther from or closer than the focus distance appear blurry, and the amount of blur hints at their distance.
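The blur cue can be quantified by how much high-frequency detail survives in an image patch: an in-focus patch has a strong Laplacian response, a defocused one does not. This sketch (NumPy only, not the Kinect's actual pipeline) compares a crisp pattern with a fully blurred one:

```python
import numpy as np

def sharpness(patch):
    """Variance of a simple 4-neighbour Laplacian: blurry patches score low."""
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

# A crisp checkerboard patch vs. an extreme blur of the same patch.
sharp = np.indices((8, 8)).sum(axis=0) % 2 * 1.0
blurred = np.full((8, 8), sharp.mean())  # all detail smoothed away

print(sharpness(sharp) > sharpness(blurred))  # the crisp patch scores higher
```

Mapping a sharpness score back to an absolute distance requires knowing the lens parameters and focus setting, which is why this works as a supporting cue rather than a standalone depth sensor.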
Components of the Microsoft Kinect
Apple doesn't explicitly say whether it uses the color image sensor, but we think it does. That's why the flood illuminator is there: to let the camera see in the dark (the IR camera already has the dot projector).
There are some additional tricks used by the Kinect. Its lens is astigmatic, meaning it has a different focal length horizontally and vertically. This gives it two readings per pixel, though it would impact image quality.
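The payoff of an astigmatic lens is that horizontal and vertical detail come into focus at different distances, so measuring sharpness separately along each axis yields two depth cues instead of one. A toy sketch of taking those two readings (illustrative only, not the Kinect's actual algorithm):

```python
import numpy as np

def axis_sharpness(img):
    """Two readings per region: gradient energy along each image axis.

    With an astigmatic lens, horizontal and vertical detail focus at
    different distances, so comparing these two scores hints at depth.
    """
    gh = np.diff(img, axis=1)  # horizontal detail (vertical edges)
    gv = np.diff(img, axis=0)  # vertical detail (horizontal edges)
    return (gh ** 2).mean(), (gv ** 2).mean()

# Vertical stripes carry only horizontal detail, so the first
# reading is high and the second is zero.
stripes = np.tile([0.0, 1.0], (8, 4))
print(axis_sharpness(stripes))
```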
The TrueDepth camera of the iPhone X has the same components
Anyway, all this data is passed on to a machine learning system that has been trained on thousands of examples: body positions in the case of the Kinect, facial expressions in the case of the iPhone X.
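To make the "trained on thousands of examples" step concrete, here is the simplest possible version of such a classifier: feature vectors derived from depth maps, grouped by class, with new inputs assigned to the nearest class centroid. The class names and feature dimensions are made up for illustration; the real systems use far more sophisticated models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the training set: each row is a feature vector
# derived from a depth map; the two classes are hypothetical.
neutral = rng.normal(loc=0.0, scale=0.1, size=(100, 16))
smiling = rng.normal(loc=1.0, scale=0.1, size=(100, 16))

# "Training" here is just averaging each class into a centroid.
centroids = {"neutral": neutral.mean(axis=0),
             "smiling": smiling.mean(axis=0)}

def classify(features):
    """Nearest-centroid: pick the label whose mean example is closest."""
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

print(classify(rng.normal(loc=1.0, scale=0.1, size=16)))
```

The principle scales up directly: more examples, richer features, and a deeper model, but still a mapping learned from labeled depth data to a pose or expression.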
You can check out this slideshow if you want to learn more about how the Kinect implementation works.