I have written some simple scripts to record WebM (VP9+Opus) videos from the two PinePhone cameras:
https://svn.calcforge.org/viewvc/pinephone-videorec/
They are tested on an up-to-date Manjaro ARM Plasma Mobile Edition for the PinePhone (in the qtquickconsole Terminal app that ships with it).
The way they work is that you run
./record.sh "Your video name"
or
./selfrecord.sh "Your selfie video name"
from the CLI (you can of course omit the quotes if the name contains no spaces or other special characters; this is plain shell syntax), let it record video, and press q on the virtual keyboard when done. (Do not press Ctrl+C, or the encoding step will not run.) Then wait for quite a while until the video is actually encoded…
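Conceptually, the approach boils down to two FFmpeg invocations. The sketch below only illustrates the idea; the device names, the PulseAudio source, and the file names are assumptions, not what record.sh actually runs (the real scripts also have to set up the camera sensor and pick the right resolution):

```shell
# Rough sketch only -- device, source and file names are assumptions.
# 1. Capture losslessly: FFVHuff+FLAC is cheap enough to keep up in
#    real time on the PinePhone. Pressing q stops FFmpeg cleanly.
ffmpeg -f v4l2 -i /dev/video1 \
       -f pulse -i default \
       -c:v ffvhuff -c:a flac video.mkv

# 2. Re-encode offline to the small delivery format. This is the
#    slow step that can take ~50x the clip duration on the phone.
ffmpeg -i video.mkv -c:v libvpx-vp9 -c:a libopus video.webm
```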
Notes:
- Video encoding takes a long time and requires a lot of power. Expect encoding to take up to 50 times the duration of the video, and keep a power bank ready when encoding long videos.
- Since encoding is so slow, video is initially recorded in a lossless format and then encoded to WebM. Keep in mind that the raw videos in lossless format are huge. So the length of the videos is also limited by the amount of space left on the device (eMMC or MicroSD) containing the current directory.
- The raw video is currently not deleted automatically; you will probably want to rm the large .mkv video and keep only the encoded .webm. (They are both Matroska containers, but the .mkv is huge lossless FFVHuff+FLAC, whereas the .webm is small VP9+Opus. The size difference is a factor >50.)
- The script will at this time only work on the original PinePhone, not on the PinePhone Pro. The PinePhone Pro has different (higher-resolution) cameras that need working kernel drivers (which are not there yet), and you will also have to adjust the hardcoded sensor names and probably the video resolutions in the script.
- The script at this time only supports PulseAudio; it was not tested with PipeWire, and at least the sound source name will probably have to be changed for PipeWire.
- The orientation of the phone is automatically detected, and the video is rotated in the encoding step. (The rotation does not make much of a speed difference compared to just encoding.)
- There is no preview. All you get to see on the display is the progress output from FFmpeg.
- The first few frames of video (in the first second) will be bad (I got an all-green frame, then an all-black frame, then a few frames gradually converging to good images).
- The video is frozen (while the sound keeps playing) during the second second of recording; then it finally starts running normally. Maybe I should just cut off the first two seconds during encoding?
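If the first two seconds do get cut, FFmpeg can do that in the same pass as the rotation and encoding. A minimal sketch, assuming hypothetical file names and a 90° clockwise rotation (the real script would pick the direction from the detected orientation):

```shell
# -ss 2 before -i skips the first two seconds of the raw recording;
# transpose=1 rotates the video 90 degrees clockwise.
# File names and rotation direction here are assumptions.
ffmpeg -ss 2 -i video.mkv \
       -vf "transpose=1" \
       -c:v libvpx-vp9 -c:a libopus video.webm
```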
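Since the raw file is not removed automatically, a cautious way to clean up is to delete it only after checking that the encoded file exists and is non-empty. A sketch with made-up file names (the stand-in files below simulate what the scripts would produce):

```shell
# Simulated stand-ins for the real output files (names are made up):
printf 'raw'     > video.mkv
printf 'encoded' > video.webm

# Delete the huge lossless original only once a non-empty encode exists:
if [ -s video.webm ]; then
    rm -f video.mkv
fi
```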