TL;DW for how this works:
He scans one line at a time with a mirror into a photomultiplier tube which can detect single-photon events. This is captured continually at 2 GSample/s (2 billion samples per second: 2B FPS) with an oscilloscope and a clever hack.
The laser is actually pulsing at 30 kHz, and the oscilloscope capture is synchronized to the laser pulse.
So we consider each 30 kHz pulse a single event in a single pixel (even though the mirror is rotating continuously). In other words, he runs the experiment 30,000 times per second, each run recording a single pixel at 2B FPS for a few microseconds. Each pixel-sized video is then tiled into a cohesive image.
Good explanation. One detail, though: it's one pixel at a time, not one line at a time. He basically does the whole sequence for one pixel, adjusts the mirror to the next one, and does it again. The explanation is around the 8-minute mark.
Just want to make it clear that in any one instant, only one pixel is being recorded. The mirror moves continuously across a horizontal sweep and a certain arc of the mirror's sweep is localized to a pixel in the video encoding sequence. A new laser pulse is triggered when one pixel of arc has been swept, recording a whole new complete mirror bounce sequence for each pixel sequentially. He has an additional video explaining the timing / triggering / synchronization circuit in more depth: https://youtu.be/WLJuC0q84IQ
One piece I'd like to see more clarification on: is he doing multiple samples per pixel (like with ray tracing)? For his 1280x720 video, that's around 900k pixels, so at 30 kHz it would take around 30 s to record one of these videos if he were doing one sample per pixel. But in theory he could run this for much longer and get a less noisy image.
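Rough numbers for that, as a throwaway Python sketch (the 3 kHz figure comes from other comments below; the samples-per-pixel values are hypothetical):

    def capture_time_s(width, height, laser_rate_hz, samples_per_pixel=1):
        # Wall-clock time to visit every pixel, one laser pulse per sample.
        return width * height * samples_per_pixel / laser_rate_hz

    for rate_hz in (30_000, 3_000):      # claimed pulse rate vs. the 3 kHz used in practice
        for spp in (1, 10):              # hypothetical samples per pixel
            t = capture_time_s(1280, 720, rate_hz, spp)
            print(f"{rate_hz/1000:g} kHz, {spp:2d} sample(s)/px: {t:7.0f} s (~{t/60:.1f} min)")

Note this ignores the time to read the data out of the oscilloscope, which other comments here say is the actual bottleneck.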
I find it interesting that a project like this would easily be a PhD paper, but nowadays Youtubers do it just for the fun of it.
It's humbling how well-rounded Brian (and other YouTubers such as Applied Science, StuffMadeHere, and HuygensOptics) is, on top of clearly being a skillful physicist: electronics, coding, manufacturing... and the guy is _young_ compared to the seasoned professionals I mentioned in the parentheses.
You should check the other channel by the same person, where he goes into more details about the system: https://www.youtube.com/@BetaPhoenixChannel
From what I remember, recording one frame took about an hour.
Yea, he’s recording several thousand samples per pixel. That’s how it becomes a video instead of a snapshot.
Check out his previous video <https://www.youtube.com/watch?v=IaXdSGkh8Ww> for more details about that part.
And the reason it matters that this is a single pixel at two billion times per second is that we can hypothetically stack many of these assemblies on top of each other and get video of a single event that is not repeatable.
What you've invented there is a camera sensor :) Silicon photomultipliers do exist and are used in some LIDAR applications. The bigger problem would be creating the 921600-channel oscilloscope to capture all this raw data.
The author explained that he originally attempted to pulse the laser at 30 kHz, but for the actual experiment used a slower rate of 3 kHz. The rate at which the digital data can be read out from the oscilloscope to the computer seems to be the main bottleneck limiting the throughput of the system.
Overall, recording one frame took approximately an hour.
Thanks for the explanation. Honestly, your explanation is better than the entire video. I watched it in full and got really confused: I completely missed the part where he said the light is pulsing at 30 kHz and was really puzzled at how he is able to move the mirror so fast to cover the entire scene.
Huh. I watched a lot, but not all, of the video, and I thought he made it clear early on that he was stitching together 1px videos & repeating the event for each pixel (about a million times for that 720p result)
FWIW he explains it better in his earlier video about the original setup. He might be assuming people have seen that.
Can we not find out why the strange behavior of the double-slit experiment occurs using this setup?
No, all the light seen as in "beams" is scattered off fog. That scattering is a measurement from the perspective of QM.
Yup, this technique also allows an oscilloscope to capture signals with frequencies higher than its Nyquist bandwidth.
The downside is that it only works with repetitive signals.
I believe this technique is known as "equivalent-time sampling".
The author uses "real time sampling" to acquire evolution of light intensity for one pixel at 2 GSps rate. The signal is collected for approximately one microsecond at each firing of the laser, and corresponding digital data is sent from the oscilloscope to the computer.
"Equivalent time sampling" is a different technique which involves sliding the sampling point across the signal to rebuild the complete picture over multiple repetitions of the signal.
https://www.tek.com/en/documents/application-note/real-time-...
I think parent meant that the image construction technique is analogous to equivalent time sampling. You’re correct in the mode of the oscilloscope’s use. However, the mode of the larger system is using a repetitive signal and sliding sampling points across it.
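For anyone who hasn't met the two terms, here is a minimal numpy sketch of the distinction; the signal shape, rates, and step size are all made up for illustration:

    import numpy as np

    def repetitive_pulse(t):
        # A made-up repetitive test signal: a 20 ns Gaussian pulse repeating every 1 us.
        return np.exp(-(((t % 1e-6) - 0.3e-6) ** 2) / (2 * (20e-9) ** 2))

    # Real-time sampling: one trigger captures the whole record at the full rate.
    fs = 2e9                                   # 2 GS/s
    t_rt = np.arange(0, 1e-6, 1 / fs)          # 2000 sample instants in one repetition
    real_time_record = repetitive_pulse(t_rt)

    # Equivalent-time sampling: one sample per trigger, with the sample instant
    # slid by a small offset on each repetition; the waveform is rebuilt over
    # many repetitions of the (assumed identical) signal.
    step = 0.5e-9
    offsets = np.arange(0, 1e-6, step)
    equiv_time_record = np.array(
        [repetitive_pulse(n * 1e-6 + off) for n, off in enumerate(offsets)]
    )

    # Identical only because the signal repeats exactly from trigger to trigger.
    assert np.allclose(real_time_record, equiv_time_record)

As the parent says, the scope itself runs in real-time mode for every pixel; it's the larger system that behaves like equivalent-time sampling, except that what gets slid between repetitions is the pixel being looked at rather than the sample instant.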
The original MIT video from 2011: "Visualizing video at the speed of light — one trillion frames per second" https://youtu.be/EtsXgODHMWk (project site: https://web.media.mit.edu/~raskar/trillionfps/)
He mentions this as the inspiration in his previous video (https://youtu.be/IaXdSGkh8Ww).
The slowmo guys did a video about a similar setup at CalTech https://youtu.be/7Ys_yKGNFRQ
It's super cool that AlphaPhoenix is able to get comparable results in his garage. The academic versions use huge lab-bench optics setups. They wind up with technically higher-quality results, but AlphaPhoenix's video is more compelling.
Insanely clever and impressive.
Some possible improvements.
1. Replace the big heavy mirror with a pair of laser galvos. They're literally designed for this and will be much faster and more precise.
Example:
https://miyalaser.com/products/miya-40k-high-performance-las...
2. Increase the precision of the master clock. There's some time smearing along the beam (see the sketch after this list for the scale of the effect). It's not that hard to make clocks with nanosecond resolution, and picosecond resolution is possible, although it's a bit of a project.
3. As others have said, time-averaging multiple runs would reduce the background noise.
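On point 2, timing jitter maps directly onto blur along the beam at the speed of light. A throwaway sketch of the scale (the jitter values are illustrative, not measured from his setup):

    C = 299_792_458.0   # speed of light, m/s

    def smear_along_beam_m(jitter_s):
        # Spatial blur along the beam caused by trigger/clock timing jitter.
        return C * jitter_s

    for jitter_s in (1e-9, 100e-12, 10e-12):
        print(f"{jitter_s * 1e12:6.0f} ps of jitter -> {smear_along_beam_m(jitter_s) * 100:5.1f} cm of smear")

So nanosecond-level jitter smears the pulse by roughly 30 cm, while getting down to tens of picoseconds brings it to the centimeter scale.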
At the point where the light is getting reflected on the mirror, it is unfocused - those galvos look too small. But a pair of larger mirrors in the same arrangement could work.
The triggering scheme is completely brilliant. One of those cases where not knowing too much made it possible, because someone who does analog debug would never do that (they would have a $50k scope!).
Does anyone have a $50,000 scope they could just give to this dude? He seems like he would make great use of it.
Honestly, I think if we each wrote a nice personal letter to Keysight they'd probably gift him one in exchange for the YouTube publicity. Several other electrical engineers on YT get free $20-50k Keysight scopes, not just for themselves, but once a year or so to give away to their audience members.
And yes, this person could make use of it. His videos are among the highest quality science explainers - he’s like the 3B1B of first principles in physics. Truly a savant at creating experiments that demonstrate fundamental phenomena. Seriously check out any of his videos. He made one that weighs an airplane overhead. His videos on speed of electricity and speed of motion and ohms law are fantastic.
Keysight is not very hobbyist-friendly these days. A year or two ago it broke on the EEVblog forums that they were refusing to honor warranties/service contracts unless you had a corporate account with them. If you were just a guy with a scope, you would be SOL.
Do you have any better recommendations?
They're not one of the big 3, but Rigol are pretty much the only scope company actively marketing models to hobbyists / individuals: https://www.rigol.com/europe/news/company-news/mho900-releas...
Also Siglent.
What this experiment does is very similar to how an ordinary LIDAR unit operates, except that during a LIDAR scan the laser and the receiver are always pointed in the same direction, whereas in this demonstration the detector scans the room while the laser stays stationary, firing across it.
But in principle, a LIDAR could be reconfigured for the purposes of such a demonstration.
If one wants to build the circuit from scratch, then specifically for such applications there exist very inexpensive time-to-digital converter chips. For example, the Texas Instruments TDC7200 costs just a few dollars and has a time uncertainty of a few tens of picoseconds.
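To put tens of picoseconds in perspective for time-of-flight ranging, a quick sketch (generic round-trip math, not the TDC7200's actual register interface; the uncertainty figures below are illustrative):

    C = 299_792_458.0   # speed of light, m/s

    def range_from_round_trip_m(t_s):
        # One-way distance implied by a measured round-trip (start-to-stop) time.
        return C * t_s / 2

    # A timing uncertainty of a few tens of picoseconds corresponds to
    # millimetre-scale range uncertainty.
    for sigma_s in (55e-12, 35e-12):
        print(f"{sigma_s * 1e12:.0f} ps -> +/- {range_from_round_trip_m(sigma_s) * 1000:.1f} mm")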
And "Flash LIDAR" captures every pixel all at once. (But the frame rate is limited by how quickly the buffer can be read out and the sensor readied for the next frame.)
I bet we can find 50,000 people with one dollar to give. Let's make this happen HN!
Hmm, it's a clever hack, but normally you'd use an oscilloscope with an "External trigger" input, like most of the older Rigols. That would let you use the full sample rate without needing to trigger from CH2.
Even modern entry-level Rigol scopes have external trigger inputs. I've got one step up from the cheapest model and it has an external trigger input. I think the idea there is that you'd use a bunch of these scopes for QA on an assembly line; there's a bunch of pass/fail features I've never once touched, too.
The view from one end of a laser going between two mirrors (timestamp 1:37) is a fairly good demonstration of the camera having to wait for light to get to it.
Ah, two billion. The first several times I saw this it looked like "twenty eight", which didn't seem terribly interesting.
The video is definitely more interesting than 28 fps but it's also not really 2B fps.
It captures two billion pixels per second. Essentially he captures the same scene several times (presumably 921,600 times to form a full 720p picture), watching a single pixel at a time, and composites all the captures together to form frames.
I suppose that for entirely deterministic and repeatable scenes, where you also don't care too much about noise and if you have infinite time on your hands to capture 1ms of footage, then yes you can effectively visualize 2B frames per second! But not capture.
Nah, it's definitely 2B fps; the frames are just 1x1, and a lot of the interesting output comes from the careful synchronization, camera pointing, and compositing of nearly a million 1x1 videos of effectively identical events.
And there are 1 million milliseconds every ~17 minutes. It doesn't take that long to capture all the angles you need, so long as you have an automated setup for recreating the scene you are videoing.
Others say that you're wrong, but I think you're describing it approximately perfectly.
As you say: It does capture two billion pixels per second. It does watch a single pixel at a time, 921,600 times. And these pixels [each individually recorded at 2B FPS] are ultimately used to create a composition that embodies a 1280x720 video.
That's all correct.
And your summary is also correct: It definitely does not really capture 2 billion frames per second.
Unless we're severely distorting the definition of a "video frame" to also include "one image in a series of images that can be as small as one pixel," then accomplishing 2B entire frames per second is madness with today's technology.
As stated at ~3:43 in the video: "Basically, if you want to record video at 2 billion frames per second, you pretty much can't. Not at any reasonable resolution, with any reasonably-accessible consumer technology, for any remotely reasonable price. Which is why setups like this kind of cheat."
You appear to be in complete agreement with AlphaPhoenix, the presenter of this very finely-produced video.
> Unless we're severely distorting the definition of a "video frame" to also include "one image in a series of images
What is your definition of "video frame" if not this?
> that can be as small as one pixel,"
Why would this be a criterion for the images? If it is, what is the minimum resolution to count as a video frame? Must I have at least two pixels for some reason? Four, so that I have a grid? These seem like weird constraints to attach to the definition when they don't enable anything that the 1x1 camera doesn't, nor are the devices that capture them meaningfully harder to build.
I agree the final result presented to the viewer is a composite... but it seems to me that it's a composite of a million videos.
We already have a useful and established term with which to describe just one pixel from a video frame.
That term is "pixel".
I concur, and as you say it comes from a video frame and thus a video. The fact that the video frame contains only a single one seems to change nothing.
I see.
If I were to agree with this, then would you be willing to agree that the single-pixel ambient light sensor adorning many pocket supercomputers is a camera?
And that recording a series of samples from this sensor would result in a video?
Yes :)
So what if he captured 2 pixels at a time; would that be enough to count as a frame?
If there is no lower bound on the size of the image that constitutes a frame, then: Please find the following pictorial summation of my thoughts on this matter to be a sufficient response to your question.
I would probably call it 2 billion fps* (with an asterisk), with a footnote to explain how it's different from how video is typically captured. Especially the fact that the resulting video is almost a million discrete events composited together, as opposed to a single event. All of which the video is transparent about, and the methodology actually makes the result more interesting IMO.
I would say that everyone - you, other commenters disagreeing with you, and the video - are all technically correct here, and it really comes down to semantics and how we want to define fps. Not really necessary to debate in my opinion since the video clearly describes their methodology, but useful to call out the differences on HN where people frequently go straight to the comments before watching the video.
A frame is by definition flexible in how many pixels tall or how many pixels wide it is, and there is nothing in the definition that says it can't be 1x1.
Each pixel was captured at 2 billion frames per second, even if technically they were separate events. Why not call it (FPS / pixels) frames per second?
I thought his method of multiplexing the single channel was very smart. I guess it's more common for 2-channel or high-end 4-channel scopes to have a dedicated trigger input, which (I've checked) this one doesn't have. That said, there are digital inputs that could've been used, presumably driven from whatever was controlling the laser.
Frankly it's uncommon to not have a trigger input. I'm not sure I've ever seen a DSO in person without a trigger in.
I was confused by that part of the video exactly because I wondered why he wasn’t using the trigger input. Or, would it normally be possible to use a different channel as the trigger for the first channel?
He explained that. His inexpensive oscilloscope _can_ trigger from the second channel, but only at one billion samples per second. Where’s the fun in that?
Oh that’s right. I saw this video earlier in the week and forgot that.
Thanks.
You’re welcome.
I guess I'm used to it. My main scope is an SDS1204 which doesn't have one (and when I inherited it the digital channels were reportedly blown up) despite being fairly capable for its combination of age and price
How did he determine the principal point to swing around?
As I understand it, this is sort of simulating what it would be like to capture this, by recreating the laser pulse and capturing different phases of it each time, then assembling them; so what is represented in the final composite is not a single pulse of the laser beam.
Would an upgraded version of this that was actually capable of capturing the progress of a single laser pulse through the smoke be a way of getting around the one-way speed of light limitation [0]? It seems like if you could measure the pulse's propagation in one direction and then the other (as measured by when it scatters off the smoke at various positions in both directions), that would get around it?
But it's been a while since I read an explanation for why we have the one-way limitation in the first place, so I could be forgetting something.
[0] https://en.wikipedia.org/wiki/One-way_speed_of_light
No, as he explains in the video, this is not a stroboscopic technique, the camera _does_ capture at 2 billion fps. But it is only a single pixel! He actually scans the scene horizontally then vertically and sends a pulse then captures pixel by pixel.
>As I understand it, this is sort of simulating what it would be like to capture this, by recreating the laser pulse and capturing different phases of it each time, then assembling them; so what is represented in the final composite is not a single pulse of the laser beam.
It is not different phases, but it is a composite! On his second channel he describes the process[0]. Basically, it's a photomultiplier tube (PMT) attached to a precise motion control rig and a 2B sample/second oscilloscope. So he ends up capturing the actual signal from the PMT over that timespan at a resolution of 2B samples/s, and then repeating the experiment for the next pixel over. Then after some DSP and mosaicing, you get the video.
>It seems like if you could measure the pulse's propagation in one direction, and the other (as measured by when it scatters of the smoke at various positions in both directions), this seems like it would get around it?
The point here isn't to measure the speed of light, and my general answer when someone asks "can I get around physics with this trick?" is no. But I'd be lying if I said I totally understood your question.
[0] https://www.youtube.com/watch?v=-KOFbvW2A-o
No, you cannot escape the conclusion of the limitations on measuring the one-way speed of light.
While the video doesn't touch on this explicitly, the discussion of the different path lengths around 25:00 in is about the trigonometric effect of the different distances of the beam from the camera. Needing to worry about that is the same grappling with the limitation on the one-way speed.
Think of it more like "IRL raytracing", where a ray (the beam) is cast and the result for a single pixel from the point of view is captured, and then it is repeated millions of times.
Even if you had a clock and camera for every pixel, the sync is dependent on the path the signal takes. Even if you sent a signal along every possible route and had a clock for each route for each pixel (a dizzyingly large number), it still isn't clear that this would represent a single inertial frame. As I understand it, even if you used quantum entanglement for sync, the path of the measurement would still be an issue. I suggest not thinking about this at all; it seems like an effective way to go mad: https://arxiv.org/pdf/gr-qc/0202031
E: Do not trust my math under any circumstances but I believe the number of signal paths would be something like 10^873,555? That's a disgustingly large number. This would reveal whether the system is in a single inertial frame (consistency around loops), but it does not automatically imply a single inertial frame. It's easy to forget that the earth, galaxy, etc are also still rotating while this happens.
The problem (ignoring quantum mechanics) is that the sensors all require an EM field to operate in. So assuming that the speed of light was weighted with a vector in space-time, it would be affected everywhere -- including in the measurement apparatus.
If on the other hand one could detect a photon by sending out a different field, maybe a gravitational wave instead... well it might work, but the gravitational wave might be affected in exactly the same way that the EM field is affected.
This video has brought back warm and fuzzy memories from my other life. When I was a scientist back in the USSR, my research subject required measuring ridiculously low amounts of light, and I used a photomultiplier tube in photon-counting mode for that. I needed a current preamp that could amplify nanosecond-long pulses and concocted one out of gallium-arsenide logic elements pushed to work in a linear mode. The tube was cooled by Peltier elements and the data fed to a remote Soviet relative of the Wang computer [0].
OMG this was back in 1979-1981.
0. - https://ru.wikipedia.org/wiki/%D0%AD%D0%BB%D0%B5%D0%BA%D1%82...
I'd like to see this with the double slit experiment
It would look something like this[1] except with slower visual propagation.
Note that this camera (like any camera) cannot observe photons as they pass through the slits -- it can only record photons once they've bounced off the main path. Thus you will never record the interference-causing photons mid-flight, and you'll get a standard interference pattern.
[1]: https://www.researchgate.net/figure/The-apparatus-used-in-th...
AlphaPhoenix mentions in the description that he wants to try and image an interference pattern, and it seems possible.
Though it wouldn't really be showing you the quantum effect; that's only proven with individual photons at a time. This technique sends a "big" pulse of light relying on some of it being diffusely reflected to the camera at each oscilloscope timestep.
Truly sending individual photons and measuring them is likely impractical, as you'd have to spend a huge amount of time collecting data for each pixel, just hoping a photon happens to bounce directly into the photomultiplier tube.
https://www.youtube.com/watch?v=sc7FlWUAnzA&t=244s
Cameras really have come a long way since the old 15 FPS movies.
Could redshift/blueshift explain why the light appeared to move at different velocity when he moved the camera to another position?
No. It's strictly because of the travel time from the "observed spot" to the camera. He explains it in the back half of the video, but for his setup (and every camera everywhere, including your eyes!), what matters isn't just how long the light took to get to the point where it scatters, but also how long it takes to get from that point to the detector. If the camera is placed far enough to the side, that time delay is close enough to identical for all samples, but if the camera is in line with the laser, there is a substantial time difference between light scattering close to the camera and light scattering near the far wall.
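Here is a tiny sketch of that geometry. The positions are made up; the laser fires from the origin along +x and the camera sits at cam_xy:

    import numpy as np

    C = 299_792_458.0   # m/s

    def apparent_arrival_time_s(x_m, cam_xy):
        # Outbound leg along the beam (from the laser at the origin, along +x)
        # plus the scattered return leg from that point to the camera.
        point = np.array([x_m, 0.0])
        return x_m / C + np.linalg.norm(point - np.array(cam_xy)) / C

    side_cam   = (1.5, 10.0)    # camera well off to the side of the beam
    inline_cam = (-0.5, 0.2)    # camera almost directly behind the laser

    for x in (1.0, 2.0, 3.0):   # metres along the beam
        print(f"x = {x:.0f} m   side: {apparent_arrival_time_s(x, side_cam) * 1e9:6.2f} ns"
              f"   in-line: {apparent_arrival_time_s(x, inline_cam) * 1e9:6.2f} ns")

With the side camera the return leg is nearly constant, so consecutive points along the beam light up roughly c apart and the pulse appears to move at about c. With the camera behind the laser, the return leg grows almost as fast as the outbound leg, so the pulse appears to crawl at roughly c/2.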
He actually explains it. It's due to the round-trip time increasing for further-away water droplets when filmed at an oblique angle. Light hitting a droplet 3 m away has a 3x longer round trip than light hitting a droplet 1 m away.
Light doesn't move at a fixed rate; light moves at the speed of causality in that frame. Causality appears to have a maximum limit in this universe in an "empty" void, but every time you hear a story of how they "slowed down light," what they actually did was make causality more complex in a dense medium, and therefore slower.
Did he actually repeat the experiment 1280x720 times for every pixel?
Yes, the laser was fired at 3 kHz, while the mirrors were slowly scanning across the room.
For each laser pulse, one microsecond of the received signal was digitized with the sample rate of 2 billion samples per second, producing a vector of light intensity indexed by time.
A large number of vectors were stored, each tagged by the pixel XY coordinates which were read out from the mirror position encoders. In post-processing, this accumulated 3D block of numbers was sliced time-wise into 2D frames, making the sequence of frames for the clip.
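A minimal numpy sketch of that slicing step, assuming the per-pixel traces have already been pulled off the scope as (x, y, trace) records (the names and shapes here are illustrative, not his actual code):

    import numpy as np

    WIDTH, HEIGHT, N_SAMPLES = 1280, 720, 2000   # ~1 us at 2 GS/s is about 2000 samples

    def assemble_frames(records):
        # records: iterable of (x, y, trace) tuples, where trace is the length-N_SAMPLES
        # intensity-vs-time vector captured for that pixel.
        # (The full float32 cube is ~7 GB; in practice you'd stream or chunk it.)
        cube = np.zeros((N_SAMPLES, HEIGHT, WIDTH), dtype=np.float32)
        for x, y, trace in records:
            cube[:, y, x] = trace    # drop each 1-pixel "video" into its column
        return cube                  # cube[t] is the full 2D frame at time-step t

Each cube[t] is then one frame of the clip, with frames 0.5 ns apart.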
It's pretty clear he had a computer repeat the experiment that many times in reasonably rapid succession rather than doing it "himself", but yes...
How is the image focused and not a big blur?
There are so many levels this could be answered at.
All light in a narrow cone extending from the camera gets recorded to one pixel, entirely independently from other pixels. There's no reason this would be blurry. Blur is an artifact created by lenses when multiple pixels interact.
There is a lens in the apparatus, which is used to project the image from the mirror onto the pinhole, but it is configured so the plane of the laser is in focus at the pinhole.
What I don't understand is how the projection remains in focus as the mirror sweeps to the side, but perhaps the change in focus is small enough not to matter.
Depth of field, small aperture
It’s in focus because only a small angular area from the scene contributes to the light that reaches the sensor. You get blur when sensor elements (individual pixels) receive light from too wide of an area.
[dead]
Techniques like this are/were used to film nuclear explosions (but with a single explosion).
Who detonated 2073600 bombs?
They did the way more expensive version briefly mentioned towards the start of the video, having 12+ cameras with ridiculously fast shutters (as low as 10 nanoseconds) arranged to run in sequence.
That was probably not the more expensive version in that case.
Why?
Because of the cost of making and detonating so many bombs.
Scanning a single pixel over an image? How does that work with an explosion? The laser pointer is reproducible
https://en.wikipedia.org/wiki/Rapatronic_camera
The rapatronic camera had an incredibly fast electronic shutter. To record a video they needed one camera per frame. Rather like "bullet time" in the movies. The technique in the youtube video is completely different.
It's not completely different. I'd argue it's the exact opposite. Instead of using a single single-pixel camera to record video of a repeatable event, a sequence of regular film cameras captured photographs of an unrepeatable event.
But that bears no relation to what happened in the video.
He did a good job on his setup, but I have to think that adding a spinning mirror would have made everything much faster and easier.
He could then capture an entire line quite quickly, and would only need a 1 dimensional janky mirror setup to handle the other axis. And his resolution in the rotating axis is limited only by how quickly he can pulse the laser.
Of course, his janky mirror setup could have been 2 off-the-shelf galvos, but I guess that isn't as much "content".
I think a spinning mirror would make it a lot harder. He's only moving the mirror after the "animation" finishes. So it's capture video, step by 1 pixel, capture video, step by 1 pixel, capture video, etc... He's replaying the scene ~1 million times, for 1 million unique single pixel 2 billion fps videos.
Basically, the constantly spinning motor would remove the complexity and slowdown of accelerating and decelerating it at the start and end of the motion. To capture a line, he's already scanning the mirror across the scene and triggering the laser at specific points (when the encoder in the servo reaches certain values); if it were continuously rotating, he could just do the same thing.
Though I don't think it would speed things up much: from what he was saying in one of the appendix videos on his second channel, he doesn't do things like triggering the laser multiple times per pixel to reduce noise because the bottleneck is copying the data off the scope, and that would stretch the run from hours to days.
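The encoder-triggered firing described above is conceptually just a threshold-crossing loop, and it works the same whether the mirror steps, sweeps, or spins at a constant rate. A toy sketch (read_encoder and fire_laser are hypothetical placeholders, not his actual firmware):

    COUNTS_PER_PIXEL = 16    # hypothetical: encoder counts per pixel of mirror arc

    def sweep_one_line(read_encoder, fire_laser, pixels_per_line=1280):
        # Fire the laser once per pixel of mirror travel while the mirror moves
        # continuously. read_encoder() and fire_laser() stand in for whatever the
        # motion controller and laser driver actually expose.
        start = read_encoder()
        fired = 0
        while fired < pixels_per_line:
            if read_encoder() - start >= fired * COUNTS_PER_PIXEL:
                fire_laser()    # each pulse corresponds to one 1-pixel, 2 GS/s capture
                fired += 1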
Yeah, spinning at a constant rate with an encoder for triggering would probably be a bit more consistent, but potentially more of a mechanical headache. And he does need a pretty big mirror, since he's limited by the amount of light he can focus onto the sensor; I'm not sure there are galvos available with such a large area (especially for a reasonable price).
Are you suggesting it would be easier if the mirror spun at 2 billion revolutions per second?
But I think he does capture an entire line quite quickly. As I understood it, he “scans” a line of pixels in seconds.