
Alan Yates' Hardware Comments


Overview

Hardware

Lighthouse Receiver
| Hardware Component(s)* | Yates Mentions | Equivalent Component Recommendations |
| --- | --- | --- |
| MCU (Microcontroller Unit) | NXP LPC11U3x series MCU | [Arduino Zero](https://www.arduino.cc/en/Main/ArduinoBoardZero) - Below Specs |
| FPGA (Field Programmable Gate Array) | Lattice ICE40 series FPGA | Mojo V3 - Marginally Exceeds Specs |
| Photodiode | BPW34 | BPW34 |
| Amplifier front-end | 2N3904 or 2N3906 | TODO |
| IMU (Inertial Measurement Unit) | Invensense 6K series IMU | Kootek GY-521 MPU-6050 |
| Comparator | SOT23-5 | SOT23-5 |
\* None of the components listed here are official.

References

  1. https://web.archive.org/web/20160326172134/https://www.reddit.com/r/Vive/comments/4b3m0o/making_sensors_to_work_with_lighthouse/d15zqak

The carrier frequency is a fair bit faster than that, around 2 MHz, and will only get higher in the future. The analogue front-end amplifier can be implemented with a few transistors; just 2N3904s and 2N3906s are perfectly capable. The TIA is a little unconventional, it took me a couple of days to make something that simple work well; most people squint at it and go "WTF is that?". It isn't the lowest input-referred-noise thing I came up with, but it deals with the largish capacitance of the BPW34 using a very simple bootstrapping trick and is very cheap compared to the textbook op-amp circuits with insanely huge UGBs that cost hundreds of times more. I am fairly happy with the current discrete sensor design, but there are a few things I would improve next time: I could probably save a transistor and some passives in the envelope detector, and the RF amp could be direct-coupled to save at least one cap, maybe a resistor or two as well. I'd also improve the dark/ambient current compliance of the front end, because I did one nutty thing to save a cap there that I probably should not have.

  2. https://web.archive.org/web/20160326172510/https://www.reddit.com/r/Vive/comments/465lqw/lighthouse_sensor_module_designs/d02sbps

The hardware needed for tracking with the current generation of Lighthouse is pretty basic: a USB-capable MCU such as the NXP LPC11U3x series, a high-quality six-axis IMU like an Invensense 6K series, a small FPGA such as a Lattice ICE40 series, and the BPW34 photodiodes and their sensor amplifiers (which can be implemented with six transistors and a moderately fast comparator, or in other ways). Add a Nordic radio and a battery management subsystem if you want it wireless. The hardware is not really the issue; I could post the CAD tomorrow and you could make your own. If you copied it exactly you could flash our firmware on it, upload the configuration file for your sensor constellation to it, and it would probably track.

But currently there is no SteamVR support for a class of custom tracked objects. Right now you would need to implement enough of a controller to keep the driver happy, despite that being superfluous for a tracking-only object, and it would be indistinguishable from a controller from an API point of view, which would be difficult to use at the application level. Given time that will be fixed. For now strapping controllers to things isn't ideal, but it is a way to prototype, and you at least know the object is calibrated and tracks properly.

  3. https://web.archive.org/web/20160326172628/https://twitter.com/vk2zay/status/711403660438208512

The prototype of the version 7 lighthouse sensor. I just can't bear to take it apart.

  4. https://web.archive.org/web/20160326173143/https://www.reddit.com/r/Vive/comments/429ce4/alan_yates_on_twitter_even_the_discrete/cz9jpua

Yeah looking at the SAMD21 datasheet it could probably handle enough sensors to make a minimal tracked object. There are versions of the D21 with USB as well. Basically any micro or FPGA that can capture both edges of a pulse signal with a resolution of about 10-50 ns will do the job. The only tricky part is having the capability to do that in parallel for N sensors and having enough grunt to at least ship the resulting data of 200-500 pulses per channel per second somewhere else (or process it in real time).
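To make the timing requirement concrete, below is a minimal sketch of both-edge pulse capture on a generic MCU, assuming a free-running 32-bit counter clocked at 48 MHz (about 21 ns per tick, inside the 10-50 ns range above). The hardware-access functions (`timer_now()`, `sensor_pin_is_high()`, `queue_push()`) are hypothetical placeholders rather than a real vendor API, and a real design would normally use hardware input capture to latch the timestamp instead of reading the counter in software.

```c
/*
 * Minimal sketch of both-edge pulse capture, assuming a hypothetical
 * HAL: timer_now() reads a free-running 32-bit counter clocked at
 * 48 MHz (~21 ns per tick), and the ISR fires on both edges of one
 * sensor's comparator output. Names and signatures are illustrative.
 */
#include <stdint.h>
#include <stdbool.h>

#define TIMER_HZ 48000000u   /* assumed capture clock */

typedef struct {
    uint32_t rise_ticks;     /* timestamp of rising edge  */
    uint32_t fall_ticks;     /* timestamp of falling edge */
} pulse_t;

/* Hypothetical hardware accessors (platform specific in practice). */
extern uint32_t timer_now(void);              /* free-running counter     */
extern bool     sensor_pin_is_high(void);     /* current level of the pin */
extern void     queue_push(const pulse_t *p); /* hand off to main loop    */

static uint32_t last_rise;

/* Called on every edge of the sensor signal. */
void sensor_edge_isr(void)
{
    uint32_t now = timer_now();

    if (sensor_pin_is_high()) {
        /* Rising edge: remember when the pulse started. */
        last_rise = now;
    } else {
        /* Falling edge: emit a complete pulse record. The width helps
         * distinguish sync flashes from sweep hits, and the start time
         * gives the angle relative to the sync. */
        pulse_t p = { .rise_ticks = last_rise, .fall_ticks = now };
        queue_push(&p);
    }
}
```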

Yeah, mostly visibility. When you design a tracked object you need to consider the field of view of each sensor (about 120-140 degrees for a typical bare photodiode sensor, which is basically a planar device with a Lambertian response), so the position and orientation of each sensor is important for getting optimal coverage with a given number of sensors. One thing to consider when sizing the receiver system is that for an N-sensor object you may get N sync pulses in some environments, because even shadowed sensors can see sync scattering off the walls. Similarly you may get more than visible-N laser pulses per sweep because of reflections, so you need some extra capacity or a method to perform data reduction as early as possible if you are trying to minimise the load on the downstream subsystems. To bootstrap full-pose tracking you need 5 sensors visible to one base (or 4 with an IMU), so that is a constraint too.

One incredibly smart chap here wrote an amazing tool which can take arbitrary geometry as an STL file and automatically place sensors on it, modelling the optimised results using the actual system parameters (noise, sensor field of view, etc.). We use this to design tracked objects and get reliable results in the real world when we build an implementation of them. This is far more robust than having humans try to design sensor placements by intuition alone, but it can also model human placements or tweaked placements. We will document and release this tool at some point.

The number of sensors you need to "track" an object varies with its shape; for example a toroidal object might need 22 sensors for bootstrapping in any orientation and good tracking baselines. Spheres would need about 28. A minimal full-pose tracked object would need at least 4 + IMU, but such sensors would need to be omnidirectional to offer 4π steradian coverage, which is basically impossible, so a few extra would be required to deal with any practical object's self-occlusion. If there are some orientations you don't care about you can use fewer, of course.

The most minimal receiver possible is obviously one sensor, which again would need to be designed to have almost 4π steradian coverage and would not give full pose, only position (an IMU might be used for orientation with respect to local gravity). This would only work with two-base visibility (technically three beam hits, or one and a half bases, as each base has two rotors), although it could dead-reckon with an IMU over small periods of occlusion of less than three rotors. Because it couldn't bootstrap itself (it has no idea of scale because it is a single point) it would require knowledge of the relative poses of the bases acquired by some other method, like an almanac sent over radio from a PC, or via the sync pulses of the bases themselves if they were taught how to compute each other's poses, or from the bootstrap solution of another higher-sensor-count object. An object with two near-omnidirectional sensors a known distance apart can solve the entire system and bootstrap, but it needs to be moved around in the volume to collect enough information to do it, and it has a rotational ambiguity in the navigation about the line connecting the sensors that can't be resolved without an additional sensor; that may not matter if you have an IMU and/or a magnetometer mounted off the inter-sensor line, or if you just don't care about orientation.
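As a rough illustration of the visibility constraint described above, here is a sketch of the per-sensor check a placement tool might perform: a base station counts as visible to a sensor when the direction to the base lies within half the sensor's field of view of the sensor normal. The types, the function name and the 130 degree example value (picked from the 120-140 degree range above) are illustrative assumptions; self-occlusion and reflections are ignored.

```c
/*
 * Rough visibility check for sensor placement. Treats each sensor as a
 * planar photodiode with the 120-140 degree field of view mentioned
 * above; a base station counts as visible when the direction from the
 * sensor to the base is within half the FOV of the sensor normal.
 */
#include <math.h>
#include <stdbool.h>

#define PI 3.14159265358979323846

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double norm(vec3 a)        { return sqrt(dot(a, a)); }

bool sensor_sees_base(vec3 sensor_pos, vec3 sensor_normal,
                      vec3 base_pos, double fov_degrees)
{
    vec3 to_base = { base_pos.x - sensor_pos.x,
                     base_pos.y - sensor_pos.y,
                     base_pos.z - sensor_pos.z };

    double cos_angle = dot(sensor_normal, to_base) /
                       (norm(sensor_normal) * norm(to_base));
    double half_fov  = (fov_degrees / 2.0) * PI / 180.0;

    return cos_angle >= cos(half_fov);
}

/*
 * A placement tool would run this test for many candidate object poses,
 * e.g. sensor_sees_base(pos, normal, base, 130.0), count how many
 * sensors pass per base (5, or 4 with an IMU, are needed to bootstrap),
 * and adjust sensor positions and normals to maximise coverage.
 */
```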

  5. https://web.archive.org/web/20160326174715/https://www.reddit.com/r/oculus/comments/3a879f/how_the_vive_tracks_positions/csaffaa

The 1st generation ASICs handle the analog front-end management. The chipset solution for a Lighthouse receiver is currently N*(PD + ASIC) -> FPGA -> MCU <- IMU. Presently the pose computation is done on the host PC; the MCU is just managing the IMU and FPGA data streams and sending them over radio or USB. A stand-alone embeddable solver is a medium-term priority and, if Lighthouse is adopted, will likely become the standard configuration.

There are currently some advantages to doing the solve on the PC, in particular the renderer can ask the Kalman filter directly for predictions instead of having another layer of prediction. It also means the complete system can use global information available to all objects the PC application cares about; for example the solver for a particular tracked object can know about Lighthouses it hasn't seen yet, but another device has.

Longer term I expect the FPGA & MCU to be collapsed into a single ASIC. Right now having a small FPGA and MCU lets us continue improving the system before committing it to silicon.

For your quadcopter application you may not even need the FPGA, if you have an MCU with enough timing resources for the number of sensors you are using (it also depends upon the operating mode of Lighthouse you pick; some are easier to do with just an MCU, the more advanced ones need high-speed logic that basically requires an FPGA). The sensor count could be very low, maybe even just one if you are managing the craft attitude with the IMU and it can be seen from two base stations at once.
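To make that split of responsibilities concrete, below is a guess at the shape of the records such an MCU might forward to the host. The field names, widths and framing are assumptions for illustration only, not the actual Lighthouse wire format.

```c
/*
 * Illustrative sketch of the kind of records the MCU might forward,
 * based on the description above: the MCU just ships FPGA pulse data
 * and IMU samples to the host, where the pose solve happens. Layout
 * and names are assumptions, not the real protocol.
 */
#include <stdint.h>

/* One timed light pulse, as captured for one sensor. */
typedef struct {
    uint8_t  sensor_id;    /* which photodiode saw the pulse        */
    uint32_t start_ticks;  /* rising-edge timestamp (capture clock) */
    uint32_t length_ticks; /* pulse width: sync flash vs sweep hit  */
} light_pulse_report;

/* One IMU sample, forwarded alongside the optical data. */
typedef struct {
    uint32_t timestamp_ticks;
    int16_t  accel[3];     /* raw accelerometer axes */
    int16_t  gyro[3];      /* raw gyroscope axes     */
} imu_report;

/* The host-side solver fuses both streams: the optical angles correct
 * the drift of the IMU, and the IMU carries the pose between sweeps. */
```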

  6. https://web.archive.org/web/20160326174850/https://www.reddit.com/r/oculus/comments/3a879f/how_the_vive_tracks_positions/csaf0j0

Do the maths: with the current receiver architecture the theoretical angular resolution is about 8 microradians at 60 Hz sweeps. The measured repeatability is about 65 microradians 1-sigma on a bad day, and frequently a lot better. This means the centroid measurement is better than, say, 300 microns at 5 metres, but like all triangulating systems the recovered pose error is very dependent upon the object baseline and the pose itself. The worst error is in the direction along the line between the base station and the object, as this range measurement is recovered essentially from the "angular size" subtended at the base station. Locally, Lighthouse measurements are statistically very Gaussian and well behaved, so Kalman filtering works very well with it. Globally there can be smooth distortions in the metric space from imperfections in the base stations and sensor constellation positions, but factory calibration corrects them (much the same as camera/lens calibration does for CV-based systems). Of course with two base stations visible concurrently, and in positions where there is little geometric dilution of precision, you can get very good fixes as each station constrains the range error of the other.
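Doing the maths as suggested: the positional effect of an angular error at range r is simply r times the angle. The short check below reproduces the figures in the quote; the 48 MHz capture clock used to derive the ~8 microradian theoretical number is an assumption, while the 65 microradian repeatability and 5 metre range come straight from the text.

```c
/*
 * The arithmetic behind the numbers above. The 48 MHz capture clock is
 * an assumption used to show where a ~8 microradian theoretical figure
 * can come from; the 65 microradian repeatability and 5 m range are
 * taken directly from the quote.
 */
#include <stdio.h>

int main(void)
{
    const double two_pi     = 2.0 * 3.14159265358979323846;
    const double sweep_hz   = 60.0;    /* rotor revolutions per second */
    const double capture_hz = 48.0e6;  /* assumed timing clock         */

    /* Angle swept by the rotor per capture-clock tick. */
    double theoretical_rad = two_pi * sweep_hz / capture_hz;

    /* Positional effect of angular jitter at range r: s = r * theta. */
    double sigma_rad = 65e-6;          /* measured 1-sigma repeatability */
    double range_m   = 5.0;
    double sigma_m   = sigma_rad * range_m;

    printf("theoretical resolution: %.1f microradians\n", theoretical_rad * 1e6);
    printf("centroid error at %.0f m: %.0f microns\n", range_m, sigma_m * 1e6);
    /* Prints roughly 7.9 microradians and 325 microns, consistent with
     * the "about 8 microradians" and "~300 micron at 5 metres" figures
     * quoted above (the repeatability is frequently much better than
     * the bad-day number used here). */
    return 0;
}
```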

  7. https://web.archive.org/web/20160326175041/https://www.reddit.com/r/oculus/comments/3a879f/how_the_vive_tracks_positions/csa73mj

When tracking with two base stations in TDM mode they take turns and there are 4 flashes per spin; the two base stations' timebases are offset by about 400 microseconds so their syncs won't collide in phase space. You could use one sync flash per universe, but there is a flash for each rotor to allow precise compensation for any phase jitter.

  8. https://web.archive.org/web/20160326175816/https://www.reddit.com/r/oculus/comments/3a879f/how_the_vive_tracks_positions/csa6pzz?context=3

There are actually two sync flashes per spin. It goes flash-sweep, flash-sweep.
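Putting the flash-sweep cadence together with the timestamps captured earlier, converting a sync/sweep pair into an angle is a one-liner. The sketch below assumes a 60 Hz rotor and timestamps in ticks of the same assumed 48 MHz capture clock; the constants and the function name are illustrative.

```c
/*
 * Converting a sync/sweep timestamp pair into an angle, following the
 * flash-sweep, flash-sweep cadence described above. Assumes a 60 Hz
 * rotor and timestamps in ticks of an assumed 48 MHz capture clock.
 */
#include <stdint.h>

#define CAPTURE_HZ 48000000.0   /* assumed timestamp clock        */
#define SWEEP_HZ   60.0         /* rotor revolutions per second   */

/* Angle (radians) swept by the rotor between the sync flash and the
 * moment the laser line crossed this sensor. */
double sweep_angle_rad(uint32_t sync_ticks, uint32_t hit_ticks)
{
    /* Unsigned subtraction handles timer wrap-around. */
    uint32_t dt_ticks = hit_ticks - sync_ticks;
    double   dt_s     = (double)dt_ticks / CAPTURE_HZ;

    /* One full revolution (2*pi rad) takes 1/SWEEP_HZ seconds. */
    return 2.0 * 3.14159265358979323846 * SWEEP_HZ * dt_s;
}

/* Two such angles per base (one per rotor, i.e. per flash-sweep pair)
 * give the horizontal and vertical bearings of the sensor as seen
 * from that base station. */
```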
