Creating a WebM video with subtitles

This is mostly a note pad for myself with quick instructions on how to create a WebM video file with embedded subtitles from the command line.

If you have a source video with A/V tracks that are already WebM compliant, plus a subtitles track, this is the quick reference command to copy and paste:

$ ffmpeg -i video.mkv -c:a copy -c:v copy -c:s webvtt video.webm

Now the long and boring story.

Lately, my preferred multimedia container has been Matroska and, to easily manage videos with multiple tracks without re-encoding them, I use the great MKVToolNix application.

Now, when I want to put a video online, the two video containers with the widest support among open source browsers are .mp4 and .webm. I’d rather use WebM whenever possible and, additionally, it is a subset of Matroska. Indeed, MKVToolNix has an option to “Create a WebM compliant file” as output.

Up until now, when dealing with a subtitles track I’ve always tried to use the SubRip subtitles format (.srt). However, when I tried with MKVToolNix to create a WebM compliant file with an .srt track, I got the error The codec type 'text subtitles' cannot be used in a WebM compliant file.

Hence, to keep things simple, I would generate a Matroska file and just convert it to .mp4 with FFmpeg, without re-encoding:

$ ffmpeg -i video.mkv -c:v copy -c:a copy -c:s mov_text video.mp4

However, a colleague recently warned me that Firefox on Linux doesn’t play the audio track of an .mp4 file if the codec is Vorbis or Opus, the ones used with WebM. Yes, Firefox supports WebM and, hence, those audio codecs, but it refuses to play them if they live inside an .mp4 container (!!)

OK, so what now … a first quick check of WebM’s documentation told me that WebM doesn’t support subtitles but intends to support WebVTT in the future …

… but the same colleague told me that WebM actually does support WebVTT. A second look at that very same page tells me that he is right! Confusing …

Fine. Let’s use MKVToolNix to create that WebM compliant file but with a WebVTT track … oh god, I got the error The codec type 'passthrough' cannot be used in a WebM compliant file. So, does WebM support subtitles in the end or not?!!!

The answer is that WebM’s way of storing WebVTT subtitles is based on an outdated WebVTT spec and is fundamentally different from Matroska’s way (which is based on the current WebVTT spec), and MKVToolNix has no intention of supporting the former.

So, WebM does support subtitles. They need to be in WebVTT format and, to create a WebM file with subtitles, the easiest way I’ve found is through FFmpeg.
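
If the subtitles live in a separate .srt file instead of inside the Matroska container, a similar one-liner should do the job (a sketch with assumed file names; -c:s webvtt converts the SubRip text to WebVTT on the fly):

$ ffmpeg -i video.webm -i subtitles.srt -map 0 -map 1 -c:a copy -c:v copy -c:s webvtt video-with-subs.webm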

End of the story?

It seems not. GStreamer is not able to deal with the WebVTT subtitles track in a WebM or Matroska file, while other programs, like mpv, have no problems with them.

How to listen to a YouTube playlist with mpv from the command line

This is mostly a note pad for myself with quick instructions on how to listen to a YouTube playlist with mpv from the command line.

mpv is a great player that integrates with yt-dlp. That way, it is capable of playing any content supported by yt-dlp.

I was looking for a way to listen to a YouTube playlist without any GUI and without wasting CPU on decoding the video part I am not interested in.

This is the quick reference command to copy and paste:

$ mpv --input-ipc-server=<custom_path_to>/socket --vo=null --ytdl-raw-options="yes-playlist=,format=best*[vcodec=none]" "<youtube_playlist_url>" &

Explanation of the parameters:

  • --vo=null: No video output.

  • --ytdl-raw-options: Comma-separated list of options that will be passed directly to yt-dlp.
    • yes-playlist=: Needs the trailing “=” with an empty value. If a playlist is provided, iterate over its elements.
    • format=best*[vcodec=none]: Select the best available format and discard the video content.
  • --input-ipc-server=<custom_path_to>/socket: Creates a socket for direct communication with mpv. Since we don’t have a GUI, this socket allows us to control the mpv instance. For example, to move to the next item in the playlist, we run:
$ echo playlist-next | socat - <custom_path_to>/socket
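
Other mpv input commands work through the same socket in exactly the same way; for example, to toggle pause:

$ echo cycle pause | socat - <custom_path_to>/socket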

I hope this is helpful!

Note:

Don’t use --vo=null if you would like to have a GUI. Then, you could also skip creating the socket with --input-ipc-server=<custom_path_to>/socket.

In that case, you could also replace the whole format=best*[vcodec=none] yt-dlp option with mpv’s --no-video flag. That will save you the video decoding, although I’m unsure it avoids the network transfer of the video stream, so it’s better to keep passing that option.
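
For reference, that simpler variant could look like this (a sketch; the playlist URL placeholder is the same as above):

$ mpv --no-video --ytdl-raw-options="yes-playlist=" "<youtube_playlist_url>"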

DIY Time Line cards game

In 2016, one of my sisters proposed crafting a gift for our parents for their 50th wedding anniversary that would work as a photo album, but be more fun. She proposed that we make our own game with family pictures, following the same idea as Timeline.

Timeline General Interest Tin Box

At the time, Timeline was a popular card game with a very simple mechanic. Each card pictures an event that happened at a specific moment in time. One side of the card only states the name of the event, while the other side reveals the year the event happened and, sometimes, further details about it.

The game consists of placing those cards on a time line from the furthest past to the present; hence its name. Additionally, the game was visually beautiful and, at the time, was made with quality materials in a nice embossed tin box.

Opened Timeline tin box

We decided to follow the same style but, since we wanted it to work as a photo album, the cards would be A5 instead of the tiny size of the original game. My sisters did the heavy work of finding the bulk of the pictures, while I created the artistic material and selected the best pictures from the very big selection I received. In the end, we covered from 1942 to 2016, with more than 120 cards.

As a FLOSS lover, in my creation process I used GIMP, Inkscape, Scribus and Krita, and placed the basic templates in a repository so that other people can use them.

I used Inkscape to create the banners at the bottom of the pictures, GIMP to modify the source pictures so they would sit nicely on the cards, and Scribus to generate the PDFs to be sent for printing.

Ideally, the cards would be in portrait orientation …

“Los padrinos bailando” – Front
“Los padrinos bailando” – Back

… but, unavoidably, some needed to be in landscape.

“Me quedo” – Front
“Me quedo” – Back

We would also need a place in which to store the cards. An embossed tin box would have been great but way too expensive for the very limited quantity we wanted to order. In the end, we decided to order an A5 cookie box from a provider that was able to print on the lid.

I took heavy inspiration from the original designs of the existing game to make a custom cover with Inkscape and converted the resulting PNG with Krita to a TIFF using the desired CMYK color profile.

Custom “50th anniversary edition” Time Line tin box

The final result was quite nice.

Feel free to use the templates from the public repository. Contributions are also welcome!

Basic LibreOffice invoice in Finland

WARNING

I take no responsibility if this guide and the linked examples are bogus and cause you any harm. The purpose of this post is solely to serve as a personal note for myself. Follow and use it at your own risk.


In this post we’ll see how to create an invoice with LibreOffice that includes the Finnish Reference Number (viitenumero), or the RF Creditor Reference generated from that Finnish Reference Number, and the Finnish Bank Bar Code (pankkiviivakoodi).


The Finnish Reference Number

The Finnish Reference Number (viitenumero) identifies a specific bank transfer. In practice, it is used in invoices sent to a customer by a company or similar entity.

When we create an invoice in Finland, we want to include such a number so that its payment follows a standard set by the Finnish banks. For example, some banks may charge a commission if this number is not included when setting up a payment.

The generation of this number follows a simple algorithm and I’ve compiled several implementations in different programming languages. Since we want to use it in LibreOffice Calc, the most suitable way is through a (Basic) macro. Its usage is as follows:

=LASKEVIITE(number)

Which number to pass, you say? Well, it is pretty much whatever you want to pass (check the algorithm for the restrictions), but I basically pass the incremental invoice number. In my case, I just number my invoices like YYYY###, meaning the 4 digits of the current year plus 3 digits for the incremental number of that invoice along the year (not in my wildest dreams would I generate more than 999 invoices in a single year). For example, 2021023 would indicate the 23rd invoice generated in 2021. The result in the cell will show 202 10236, which is the provided number plus a check digit calculated by the algorithm, returned with proper formatting (groups of 5 digits).
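
For reference, this is a minimal shell sketch of that check digit calculation (my own illustrative implementation of the published 7-3-1 weighting, not the actual LASKEVIITE macro):

# Illustrative only: weights 7, 3, 1 are applied to the digits from right to left,
# and the check digit completes the weighted sum to the next multiple of ten.
laske_viite() {
    local base="$1" sum=0 i digit
    local weights=(7 3 1)
    local reversed; reversed=$(echo "$base" | rev)
    for (( i=0; i<${#reversed}; i++ )); do
        digit=${reversed:$i:1}
        sum=$(( sum + digit * weights[i % 3] ))
    done
    echo "${base}$(( (10 - sum % 10) % 10 ))"
}

laske_viite 2021023   # prints 20210236, shown as "202 10236" when grouped in fives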

The RF Creditor Reference

The RF Creditor Reference is an international business standard for a number preceded by the letters “RF” which serves a similar purpose to the Finnish Reference Number: identifying a specific payment.

The generation of this number also follows a simple algorithm and it was first introduced within the SEPA rulebook 3.2. Since having both numbers in the same invoice would be redundant and the RF Creditor Reference is international, we could just use this number and it should be accepted by the Finnish banks and some other banks, particularly some of those accepting SEPA transfers.

If we use the Finnish Reference Number, the calculation of the RF Creditor Reference check digits is pretty simple. We can just use the following formula in a cell:

=TEXT(98-MOD(SUBSTITUTE(LASKEVIITE(number)," ","")*1000000+271500,97),"00")

Following the example above, for the generated Finnish Reference Number 202 10236, the result in the cell will show 42. Then, we only have to concatenate everything together to get the full RF Creditor Reference: RF42 2021 0236.
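
The same calculation can be sanity-checked outside Calc; for example, with plain bash arithmetic (using the reference number generated above):

$ printf 'RF%02d %s\n' $(( 98 - (20210236 * 1000000 + 271500) % 97 )) "2021 0236"
RF42 2021 0236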

Bar codes

Finnish banks have jointly developed the (Finnish) Bank Bar Code (Pankkiviivakoodi) in order to accelerate payments and to prevent erroneous data from being keyed in. The Bank Bar Code is a Code 128 bar code explicitly developed to be used with reference numbers in invoicing.

Hence, we want to add support to our Calc spreadsheet for generating bar codes. Fortunately, Jiří Gabriel created a sophisticated (Basic) macro for generating a whole set of 1D and 2D bar codes. This macro is able to generate the bar codes as graphical objects or as text which will be shown as the proper bar code once the companion BarsAndSpaces.ttf font is applied to it. In my examples, I use the latter, but you may prefer the former since that would allow you not to have that font installed.

This formula will show, as the cell result, a Code 128 bar code containing the passed number once we apply the BarsAndSpaces.ttf font to its content:

=ENCODEBARCODE(CELL("SHEET"),CELL("ADDRESS"),number,0,0)

The Finnish Bank Bar Code

The Bank Bar Code is a form of presenting payment transaction data, approved by Finance Finland (FFI). As commented above, it was jointly developed by Finnish banks and needs a reference number.

The Bank Bar Code has 2 versions currently in active use: version 4 uses the Finnish Reference Number while version 5 uses the RF Creditor Reference. Below, you can see how the 54-character structure of this bar code is divided, depending on the version:

Data                                                          Length   Value
Version                                                       1        4
The numeric part of the payee’s bank account number (IBAN)    16       N
Euros                                                         6        N
Cents                                                         2        N
Reserve                                                       3        000
Finnish Reference Number                                      20       N
Due Date                                                      6        YYMMDD

(Finnish) Bank Bar Code v.4

Data                                                          Length   Value
Version                                                       1        5
The numeric part of the payee’s bank account number (IBAN)    16       N
Euros                                                         6        N
Cents                                                         2        N
The numeric part of the RF Creditor Reference                 23       N
Due Date                                                      6        YYMMDD

(Finnish) Bank Bar Code v.5

As you can see, the format is simple enough that it can be easily implemented in a spreadsheet.
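
As an illustration, this shell sketch assembles the 54-character version 4 payload from made-up example values (the field widths follow the tables above; IBAN digits, amount and due date are placeholders):

# Illustrative only: version 4 payload = version(1) + IBAN digits(16) +
# euros(6) + cents(2) + reserve "000"(3) + reference(20) + due date(6) = 54 chars.
# Version 5 replaces the reserve and reference fields with the 23-digit
# numeric part of the RF Creditor Reference.
version=4
iban_digits=2112345600000785            # numeric part of the payee's IBAN (example value)
euros=482; cents=99                     # 482.99 EUR (example value)
reference=00000000000020210236          # Finnish Reference Number, zero-padded to 20 digits
due_date=210612                         # YYMMDD (example value)

printf '%s%s%06d%02d000%s%s\n' "$version" "$iban_digits" "$euros" "$cents" "$reference" "$due_date"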

The full documentation for the Bank Bar Code is provided by Finance Finland.

Invoice examples

After all the blabbering above we get to the part that is, probably, the most interesting for you: the ODS examples.

I hope it helps you! 😀️

Appendix: QR Codes

Finance Finland also provides Guidelines for the use of QR code in credit transfer forms. Additionally, our bar codes macro also supports QR code generation. In other words, it would be possible to create an invoice with LibreOffice Calc featuring this QR code (QR-koodi).

However, I’ll leave this for another day … 😉️

Replaying 3D traces with piglit


If you don’t know what trace-based rendering regression testing is, read the appendix before continuing.


The Mesa community has witnessed an explosion of interest in Continuous Integration in the last two years.

In addition to checking that the project builds properly, integrating the testing of its functional correctness has become a priority. The user space graphics drivers exhibit a wide variety of test types and test suites. One of those kinds is trace-based rendering regression testing.

The public effort to add this kind of tests into Mesa’s CI started with this mail from Alexandros Frantzis.

At some point, we had support for replaying OpenGL, Vulkan and D3D11 traces using apitrace, RenderDoc and GFXReconstruct with the in-tree tool tracie. However, it was a very custom solution made for the needs of Mesa, so I proposed moving this codebase and integrating it into the piglit test suite. It was a natural step forward.

This is how replayer was born into piglit.

replayer

The first step to test a trace is, actually, obtaining a trace. I won’t go into the details of how to create one from scratch; the process is well documented for each of the tools listed above. However, the Mesa community has been collecting publicly distributable traces for a while and placing them in traces-db, whose CI copies them to Freedesktop.org’s MinIO instance.

To make things simple, once we have built and installed piglit, if we would like to test an OpenGL trace created with apitrace, we can download it from there with:

$ replayer.py download \
 	 --download-url https://minio-packet.freedesktop.org/mesa-tracie-public/ \
 	 --db-path ./traces-db \
 	 --force-download \
 	 glxgears/glxgears-2.trace

The parameters are self-explanatory. The downloaded trace will now exist at ./traces-db/glxgears/glxgears-2.trace.

The next step will be to dump an image from the trace. Since it is a .trace file, we will need to have apitrace installed on the system. If we do not specify the call(s) from which to dump the image(s), we will just get the last frame of the trace:

$ replayer.py dump ./traces-db/glxgears/glxgears-2.trace

The dumped PNG image will be at ./results/glxgears-2.trace-0000001413.png. Notice that the number suffix is the snapshot id from the trace.

Dumping from a trace may result in a range of different possible images. One example is when the trace makes use of uninitialized values, leading to undefined behaviors.

However, since the original aim was performing pre-merge rendering regression testing in Mesa’s CI, the idea is that replaying any of the provided traces is quick and the dumped image is consistent. In other words, if we dump the same frame of a trace several times with the same GFX stack, the image will always be the same.

With this precondition, we can test whether two images are the same just by hashing their contents. replayer can obtain the hash of the generated dumped image:

$ replayer.py checksum ./results/glxgears-2.trace-0000001413.png 
f8eba0fec6e3e0af9cb09844bc73bdc8

Now, if we build a different commit of Mesa, we can check the image generated at this new point against the previously generated reference image. If everything goes well, we will see something like:

$ replayer.py compare trace \
 	 --download-url https://minio-packet.freedesktop.org/mesa-tracie-public/ \
 	 --device-name gl-vmware-llvmpipe \
 	 --db-path ./traces-db \
 	 --keep-image \
 	 glxgears/glxgears-2.trace f8eba0fec6e3e0af9cb09844bc73bdc8
[dump_trace_images] Info: Dumping trace ./traces-db/glxgears/glxgears-2.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./traces-db/glxgears/glxgears-2.trace
// process.name = "/usr/bin/glxgears"
1384 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

1413 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

error: drawable failed to resize: expected 1515x843, got 300x300
[dump_trace_images] Running: eglretrace --headless --snapshot=1413 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace- ./blog-traces-db/glxgears/glxgears-2.trace
Wrote ./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413.png

OK
[check_image]
    actual: f8eba0fec6e3e0af9cb09844bc73bdc8
  expected: f8eba0fec6e3e0af9cb09844bc73bdc8
[check_image] Images match for:
  glxgears/glxgears-2.trace

PIGLIT: {"images": [{"image_desc": "glxgears/glxgears-2.trace", "image_ref": "f8eba0fec6e3e0af9cb09844bc73bdc8.png", "image_render": "./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413-f8eba0fec6e3e0af9cb09844bc73bdc8.png"}], "result": "pass"}

replayer’s compare subcommand is the one that emits piglit-formatted test output.

Putting everything together

We can make the whole process way simpler by passing replayer a YAML test list file. For example:

$ cat testing-traces.yml
traces-db:
  download-url: https://minio-packet.freedesktop.org/mesa-tracie-public/

traces:
  - path: gputest/triangle.trace
    expectations:
      - device: gl-vmware-llvmpipe
        checksum: c8848dec77ee0c55292417f54c0a1a49
  - path: glxgears/glxgears-2.trace
    expectations:
      - device: gl-vmware-llvmpipe
        checksum: f53ac20e17da91c0359c31f2fa3f401e
$ replayer.py compare yaml \
 	 --device-name gl-vmware-llvmpipe \
 	 --yaml-file testing-traces.yml 
[check_image] Downloading file gputest/triangle.trace took 5s.
[dump_trace_images] Info: Dumping trace ./replayer-db/gputest/triangle.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./replayer-db/gputest/triangle.trace
// process.name = "/home/anholt/GpuTest_Linux_x64_0.7.0/GpuTest"
397 glXSwapBuffers(dpy = 0x7f0ad0005a90, drawable = 56623106)

510 glXSwapBuffers(dpy = 0x7f0ad0005a90, drawable = 56623106)


/home/anholt/GpuTest_Linux_x64_0.7.0/GpuTest
[dump_trace_images] Running: eglretrace --headless --snapshot=510 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/gputest/triangle.trace- ./replayer-db/gputest/triangle.trace
Wrote ./results/trace/gl-vmware-llvmpipe/gputest/triangle.trace-0000000510.png

OK
[check_image]
    actual: c8848dec77ee0c55292417f54c0a1a49
  expected: c8848dec77ee0c55292417f54c0a1a49
[check_image] Images match for:
  gputest/triangle.trace

[check_image] Downloading file glxgears/glxgears-2.trace took 5s.
[dump_trace_images] Info: Dumping trace ./replayer-db/glxgears/glxgears-2.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./replayer-db/glxgears/glxgears-2.trace
// process.name = "/usr/bin/glxgears"
1384 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

1413 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)


/usr/bin/glxgears
error: drawable failed to resize: expected 1515x843, got 300x300
[dump_trace_images] Running: eglretrace --headless --snapshot=1413 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace- ./replayer-db/glxgears/glxgears-2.trace
Wrote ./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413.png

OK
[check_image]
    actual: f8eba0fec6e3e0af9cb09844bc73bdc8
  expected: f8eba0fec6e3e0af9cb09844bc73bdc8
[check_image] Images match for:
  glxgears/glxgears-2.trace

replayer also features the query subcommand, which is just a helper for reading the YAML files with the test configuration.

Testing the other kinds of supported 3D traces doesn’t change much from what’s shown here. Just make sure to have the needed tools installed: RenderDoc, GFXReconstruct, the VK_LAYER_LUNARG_screenshot layer, Wine and DXVK. A good reference for building, installing and configuring these tools is Mesa’s GL and VK test containers building scripts.

replayer also accepts several configuration options to tweak its behavior and to point to the actual tracing tools needed for replaying the different types of traces. Make sure to check the replay section in piglit’s configuration example file.

replayer’s README.md file is also a good read for further information.

piglit

replayer is a test runner in a similar fashion to shader_runner or glslparsertest. What we are still missing is how it integrates, so that we can do piglit runs which produce piglit-formatted results.

This is done through the replay test profile.

This profile needs a couple of configuration values. The easiest way is to set the PIGLIT_REPLAY_DESCRIPTION_FILE and PIGLIT_REPLAY_DEVICE_NAME environment variables. They are self-explanatory, but make sure to check the documentation for this and other configuration options for this profile.

The following example features a run similar to the one above where replayer was invoked directly, but with the piglit integration, producing formatted results:

$ PIGLIT_REPLAY_DESCRIPTION_FILE=testing-traces.yml PIGLIT_REPLAY_DEVICE_NAME=gl-vmware-llvmpipe piglit run replay -n replay-example replay-results
[2/2] pass: 2   
Thank you for running Piglit!
Results have been written to replay-results

We can create a summary based on the results:

# piglit summary console replay-results/
trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace: pass
trace/gl-vmware-llvmpipe/gputest/triangle.trace: pass
summary:
       name: replay-example
       ----  --------------
       pass:              2
       fail:              0
      crash:              0
       skip:              0
    timeout:              0
       warn:              0
 incomplete:              0
 dmesg-warn:              0
 dmesg-fail:              0
    changes:              0
      fixes:              0
regressions:              0
      total:              2
       time:       00:00:00

Creating an HTML summary may also be interesting, especially when finding failures!
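
For instance, something like this should generate it (assuming the standard piglit summary invocation; the output directory name is arbitrary):

$ piglit summary html replay-summary replay-results/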

Wishlist

  • Through different backends, replayer supports running apitrace, RenderDoc and GFXReconstruct traces. We may want to support other tracing tools in the future. The dummy backend used for functional testing is a good starting point when writing a new backend.
  • The solution chosen for checking whether we detect a rendering regression depends on having consistent results, as said before. It’d be great if we could add a secondary testing method for whenever the expected rendered image is variable. Off the top of my head, using exclusion masks could be a quick single-run solution when we know which specific areas in a rendered scenario are the ones fluctuating. For more complex variations, a multi-run based solution seems to be the best option. EzBench has a great statistical approach for this!
  • The current syntax of the YAML test list files implies running the compare subcommand with its default behavior of checking against the last frame of the tested trace. This means first figuring out which call number corresponds to the last frame. It would be great to support providing the call numbers directly in the YAML files, to be able to test more than just the last frame and, additionally, to cut down the time taken to run the test.
  • The generated HTML summary allows us to see the reference and generated images side by side when a test fails. It’d be great to also have some easy way of checking their differences. Using Rembrandt.js could be a possible solution.

Thanks a lot to the whole Mesa community for helping with the creation of this tool. Alexandros Frantzis, Rohan Garg and Tomeu Vizoso did a lot of the initial development for the in-tree tracie tool, and Dylan Baker was very patient reviewing my patches for the piglit integration.

Finally, thanks to Igalia for allowing me to work on this.


Appendix

In 3D computer graphics, we say “traces”, for short, to name the files generated by 3D API capturing tools, which store not only the calls to the specific 3D API but also the internal state of the 3D program during the capturing process: shaders, textures, buffers, etc.

Being able to “record” the execution of a 3D program is very useful. Usually, it allows us to replay the execution without needing the original program from which we generated the trace; it also allows in-depth analysis for debugging and performance optimization; it’s a very good solution for sharing with other developers; and, in some cases, it allows us to check how the replay behaves with different GPUs.

In this post, however, I focus on a specific usage: rendering regression testing.

When doing a regression test, we compare a specific metric obtained by replaying the trace with one version of the GFX software stack against the same metric obtained from a different version of the GFX stack. If the value of the metric changes, we have found a regression (or an improvement!).

To make things simpler, we would like to check changes happening just in one of the many elements of the software stack. The most relevant component is the user space driver. In particular, I care about the Mesa drivers and the GNU/Linux stack.

Mainly, there are two kinds of regression testing we can do with a trace: performance or rendering regression testing. When doing a performance one, the checked metric(s) are usually in terms of speed or memory usage. In the case of the rendering ones, we compare the rendered output at one (or many) points during the trace replay. This output, a bitmap image, is the metric that we compare between two different points of the Mesa driver. If the images differ, we may have found a regression (artifacts, improper colors, etc.) or an enhancement, if the reference image is the one featuring any of those problems.

Installing LineageOS in the Sony Xperia XZ2 Compact Dual (in GNU/Linux) 5/5: Appendixes

WARNING

I take no responsibility whatsoever if this guideline causes any harm to your device. The intention of these posts is solely to serve as personal notes for myself. Follow them at your own risk.

WARNING

Through these steps I will unlock the phone’s bootloader, erasing all data. This includes the DRM keys stored in the Trim Area (TA) partition. I’ll attempt backing them up but, as of today, there is no way of restoring them to the previous state nor knowing if the actual backup is usable at all.

Without these DRM keys, several proprietary audio and video features provided by Sony won’t be available, including some camera post-processing features, color gamut profiles, white balance, noise reduction, X-Reality Video Enhancement, DSEE HX, ClearAudio+, and Widevine L1 support for HD Netflix.

Appendixes

Previously, we downgraded Sony’s stock firmware, backed up the Trim Area, installed LineageOS, and, finally, brought back Sony’s stock camera app.

The installation had some bumps, so here is a list of the things worth commenting on …

Installing TWRP

I wanted to install the TWRP recovery tool just because I have experience with it and like it better. I did, and used it successfully the first time I installed LineageOS. However, the LineageOS installation also installed the Lineage Recovery, so I lost TWRP and, funnily enough, the steps I followed to install TWRP don’t work any more. Hence, I’m stuck with the Lineage Recovery, and that’s why I explain how to directly install the Lineage Recovery in the previous post.

For the curious, these are the steps I followed.

First, we need to download the Unofficial TWRP recovery and extract it into a folder. After this, I rebooted the device into fastboot mode via software. This is important: don’t do it using the hardware keys.

Once in fastboot mode:

root$ fastboot --disable-verity --disable-verification flash vbmeta vbmeta.img
[...]
root$ fastboot boot twrp-apollo.img

This will boot temporarily into TWRP. Now, we need to do the actual flash of TWRP. First, we need to copy the image:

root$ adb push twrp-apollo.img /tmp

And, from TWRP: Advanced menu -> Install recovery ramdisk -> Choose /tmp/twrp-apollo.img. After finishing installing TWRP, we can reboot into Recovery using the newly installed TWRP.

From there, you can also wipe the system and data and install LineageOS, Open GApps and Magisk. Interestingly, TWRP offers the possibility of pushing these files and installing them without using adb sideload. This is interesting because it also makes it easier to install a customized Open GApps package using a gapps-config file.

Notice that, more often than not, when booting into the TWRP recovery the touchscreen wouldn’t work. This behavior is random, and rebooting into Recovery again may fix it on the next try.

Installing the Android System Webview from the Open GApps

As mentioned, I wanted to customize Open GApps to install the Android System Webview.

I did.

However, choosing this will only install a Google WebView stub. What this means is that you won’t really have a WebView and, hence, when booting into the system, anything which makes use of a WebView will just crash (including signing in to Google in order to use Google Play).

In other words, you need to install the real package but you won’t have an easy way of doing that. You will have to manually download the real APK from some 3rd party site and install it, for example, with adb install.
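
For instance, with the phone booted and USB debugging enabled, something like this should do it (the APK file name is just a placeholder for whatever you downloaded):

root$ adb install downloaded-android-system-webview.apk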

In the end, I just reinstalled without it after reading that the WebView used by LineageOS is based on Chromium. So far so good.

Migration from my older phone

In order to move all my data from my Xiaomi Redmi 2 running LineageOS 14.1, I used the OAndBackupX application. It needs root rights but, fortunately, I had them on both systems and devices.

Some comments:

  • Make sure to use the exact same version of OAndBackupX on both devices.
  • The app suggests using encryption when creating the backups. I didn’t use it since it made things more complicated.
  • I only backed up the user applications. At first, I also backed up the special cases (user accounts, for example) and, after restoring, that caused me a headache with Nextcloud and DAVx⁵. Of course, jumping from 14.1 to 17.1 could cause this kind of trouble.

I initially used an SD card to move the backups. I wouldn’t recommend it now. If you have limited storage, just create the backups in steps and move them in batches using adb. With both devices plugged in with USB cables and in developer mode:

root$ adb devices
List of devices attached
origin device
destination device

root$ adb -s origin root
restarting adbd as root
root$ adb -s destination root
restarting adbd as root
root$ # Create some backups in the origin device
root$ adb -s origin pull /storage/emulated/0/OABX .
[...]
root$ adb -s destination push OABX/* /storage/emulated/0/OABX/
[...]
root$ # Restore the backups in the destination device
root$ adb -s origin shell
origin:/ $ rm -rf /storage/emulated/0/OABX/*
origin:/ $ exit
root$ adb -s destination shell
destination:/ $ rm -rf /storage/emulated/0/OABX/*
destination:/ $ exit
root$ # Create some more backups in the origin device and repeat the process

Of course, there is no need to delete the backups if you don’t have to.

Installing LineageOS in the Sony Xperia XZ2 Compact Dual (in GNU/Linux) 4/5: Bringing back Sony’s stock camera app

WARNING

I take no responsibility whatsoever if this guideline causes any harm to your device. The intention of these posts is solely to serve as personal notes for myself. Follow them at your own risk.

WARNING

Through these steps I will unlock the phone’s bootloader, erasing all data. This includes the DRM keys stored in the Trim Area (TA) partition. I’ll attempt backing them up but, as of today, there is no way of restoring them to the previous state nor knowing if the actual backup is usable at all.

Without these DRM keys, several proprietary audio and video features provided by Sony won’t be available, including some camera post-processing features, color gamut profiles, white balance, noise reduction, X-Reality Video Enhancement, DSEE HX, ClearAudio+, and Widevine L1 support for HD Netflix.

Bringing back the stock camera

In the previous posts we have downgraded the stock firmware from Sony, backed up the Trim Area (TA) partition and installed LineageOS.

Thanks to the great people from the xda-developers forum, we have the chance to add Sony’s stock camera app. We will adb sideload it the same way we installed Magisk in the previous post.

First, the zip is called SemcCamera (SemcCamera-xz2c-52.1.A.2.1.zip at the moment of writing this) and it is, currently, the only add-on available for the Official LineageOS 17.1 image for the xz2c phone.

We download the file, reboot into Recovery Mode and plug the phone into the computer with the USB cable. Select Apply Update -> Apply from ADB:

root$ adb sideload SemcCamera-xz2c-52.1.A.2.1.zip
Total xfer: 1.00x

Now, Go back -> Reboot system now.

Currently, the stock camera won’t work out of the box. It needs SELinux to be disabled or set to Permissive. Luckily, since we have Magisk installed and can grant root privileges, we can install SELinuxModeChanger and do so.
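
Alternatively, with root access from Magisk, SELinux can be switched to Permissive temporarily from a shell to check that this is really the culprit (the setting resets on reboot, which is why a tool like SELinuxModeChanger is handier):

root$ adb shell "su -c 'setenforce 0'"    # switch SELinux to Permissive until the next reboot
root$ adb shell getenforce                # should now report Permissive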

That’s it, now you should be able to use Sony’s stock camera!

Extra treat: add Sony’s Bokeh app

Sony also provides a nice application for taking fancy photos: Bokeh (Background defocus).

Unfortunately, we cannot install it just from Google’s Play Store since it claims that the app is not compatible with this phone.

However, we can force the installation, for example, using the Aurora Store.

Finally, if you want to know about some bumps I got during the road, continue to the Appendixes.

Installing LineageOS in the Sony Xperia XZ2 Compact Dual (in GNU/Linux) 3/5: Installing LineageOS

WARNING

I take no responsibility whatsoever if this guideline causes any harm to your device. The intention of these posts is solely to serve as personal notes for myself. Follow them at your own risk.

WARNING

Through these steps I will unlock the phone’s bootloader, erasing all data. This includes the DRM keys stored in the Trim Area (TA) partition. I’ll attempt backing them up but, as of today, there is no way of restoring them to the previous state nor knowing if the actual backup is usable at all.

Without these DRM keys, several proprietary audio and video features provided by Sony won’t be available, including some camera post-processing features, color gamut profiles, white balance, noise reduction, X-Reality Video Enhancement, DSEE HX, ClearAudio+, and Widevine L1 support for HD Netflix.

Upgrading to latest stock firmware

In the previous posts we have downgraded the stock firmware from Sony and backed up the Trim Area (TA) partition.

Since the guideline to install LineageOS mandates that we have the latest stock firmware from Sony running on the phone, I now upgraded from the downgraded, exploitable version.

Using the built-in updater doesn’t seem to work any more (?!), so I had to flash the latest stock firmware in a similar fashion to how I did the downgrade.

I had already downloaded the firmware and it was already properly extracted so, in this case, I didn’t need to use flashtool, only newflasher.

These are the steps to follow once I connected the phone with the USB cable in flash mode:

root$ rm "H8324_Customized NOBA_1313-6167_52.1.A.3.49_R4C"/*ta
root$ rm "H8324_Customized NOBA_1313-6167_52.1.A.3.49_R4C"/boot/*ta
root$ cp -a newflasher.x64 "H8324_Customized NOBA_1313-6167_52.1.A.3.49_R4C"
root$ cd "H8324_Customized NOBA_1313-6167_52.1.A.3.49_R4C"
root$ chmod +x newflasher.x64
root$ ./newflasher.x64

[...]

Reboot mode at the end of flashing:
typa 'a' for reboot to android, type 'f' for reboot to fastboot, type 's' for reboot to same mode, type 'p' for poweroff, and press ENTER.
a

[...]

Optional step! Type 'y' and press ENTER if you want dump trim area, or type 'n' and press ENTER to skip.
Do in mind this doesn't dump drm key since sake authentifiction is need for that! But it is recommend to have dump in case hard brick!
n

[...]

Recommended step to skip this! Type 'y' and press ENTER if you want flash persist partition, or type 'n' and press ENTER to skip.
More info https://forum.xda-developers.com/xperia-xz1-compact/help/android-attest-key-lost-bootloader-t3829945
n

[...]

Device is put now out of flash mode.
Sent command: Sync
Waiting sync to finish…
……………… done
Sent command: continue.
Done.
Closing device.

Notice the questions and the answers. After a while, the phone will complete its reboot and we will be able to verify that the running firmware is the one flashed.

As explained in the previous posts, enable developer mode and USB debugging on the phone once again. For the next steps, also enable OEM unlocking in the developer options.

Installing LineageOS

We’ll follow the official guide.

First, we’ll unlock the bootloader. This will finally wipe out the TA partition, losing the DRM keys. This is the point of no return.

I checked that my phone’s bootloader can be unlocked. I opened the phone application and dialed *#*#7378423#*#* to open the service menu. There, I went to Service info -> Configuration and checked that the Rooting status: entry states Bootloader unlock allowed: Yes. I also noted down the IMEI.

Now, I connected the device to my PC with the USB cable and proceeded to put it in fastboot mode:

root$ adb reboot bootloader
root$ fastboot devices
[...] fastboot
root$ fastboot oem unlock 0x<insert your unlock code>
…
OKAY [ 16.947s]
finished. total time: 16.947s

Unplug and start the phone. As explained in the previous posts, enable developer mode and USB debugging on the phone once again.

At this point, I decided not to keep following the LineageOS guideline since it explains how to use the Lineage Recovery. Instead, I used the TWRP recovery tool just because I have experience with it and like it better. However, I don’t recommend it as I’ve explained in the Appendixes.

Therefore, I explain here the same steps as the official guide. Download the latest Lineage Recovery and LineageOS installation package, connect the device to your PC with the USB cable and put it in fastboot mode:

root$ adb reboot bootloader
root$ fastboot devices
[...] fastboot
root$ fastboot flash boot <lineage_recovery>.img

Power off the device and now turn it on into Recovery mode by pressing Volume Down + Power.

Now, booted into Recovery mode, I downloaded the copy-partitions-20200903_1329.zip pre-install tool and selected Apply Update -> Apply from ADB.

root$ adb sideload copy-partitions-20200903_1329.zip

Once finished, reboot again into Recovery mode: Go back -> Advanced -> Reboot to recovery. Once back, Factory reset -> Format data/factory reset. Once finished, let’s install LineageOS: Go back -> Apply Update -> Apply from ADB.

root$ adb sideload lineage-17.1-<date>-xz2c-signed.zip

Now, before booting into the system, we want to add some more stuff: Open GApps and Magisk.

Magisk is a suite for customizing Android. Most importantly, it provides root access to the device, which I wanted to have in order to create backups of the installed applications and restore them, among other things.

Open GApps will provide us with the core functionality Google provides for Android. Most importantly, it will provide us with Google Play. It is critical to install this before booting into the system for the first time. Otherwise, we would have to repeat the Factory reset step and wipe out all our personal data before attempting to install it.

Open GApps provides packages of different sizes. The recommended ones for LineageOS are pico or nano, but nothing bigger. Since I’m a troublemaker, I also wanted to customize the package to include the Android System Webview and to remove even more packages than the pico and nano packages do.

Hence, I downloaded the stock package for the ARM64 platform and the 10.0 Android version.

For reasons that I explain in the Appendixes, in the end I didn’t install the Android System Webview, but I did remove some packages anyway. The way to customize a package is through a gapps-config file. However, with this method the installation cannot be done through adb sideload and, unfortunately, the Lineage Recovery only offers this way of installing packages into the system.

Luckily enough, the Open GApps package is nothing more than a zip file, so I could embed my options directly into the installer. So, after installing the LineageOS system, I rebooted again into Recovery mode: Go back -> Advanced -> Reboot to recovery and, back in Recovery, let’s install Open GApps: Apply Update -> Apply from ADB.

root$ mkdir my_gapps
root$ cd my_gapps
root$ unzip ../open_gapps-arm64-10.0-stock-<date>.zip
root$ cat installer.sh # Added the lines below that create and reference my_config.txt

[...]

echo "Include" > "$TMP/my_config.txt"
echo "" >> "$TMP/my_config.txt"
echo "CalSync" >> "$TMP/my_config.txt"
echo "DialerFramework" >> "$TMP/my_config.txt"
echo "GoogleTTS" >> "$TMP/my_config.txt"
echo "PackageInstallerGoogle" >> "$TMP/my_config.txt"
echo "BatteryUsage" >> "$TMP/my_config.txt"
echo "Speech" >> "$TMP/my_config.txt"
echo "#GooglePay" >> "$TMP/my_config.txt"
echo "Translate" >> "$TMP/my_config.txt"

# Locate gapps-config (if used)
for i in "$TMP/aroma/.gapps-config"\
"$TMP/my_config.txt"\
"$zip_folder/.gapps-config"\

[...]

root$ zip -0 -r ../my_gapps *
root$ cd ..
root$ adb sideload my_gapps.zip

And, now, let’s install Magisk: Apply from ADB.

root$ adb sideload Magisk-v20.4.zip

Once finished we can finally Go back -> Reboot system now.

Congratulations, your Sony Xperia XZ2 Compact Dual is now running LineageOS 17.1!!!

I have to say that, so far, I’m quite happy with the phone. It is a huge improvement for me, coming from a Xiaomi Redmi 2.

However, the camera has lost some enhanced functionality so let’s continue to bring back Sony’s stock camera app.

Installing LineageOS in the Sony Xperia XZ2 Compact Dual (in GNU/Linux) 2/5: Backing up the Trim Area (TA) partition

WARNING

I take no responsibility whatsoever if this guideline causes any harm to your device. The intention of these posts is solely to serve as personal notes for myself. Follow them at your own risk.

WARNING

Through these steps I will unlock the phone’s bootloader, erasing all data. This includes the DRM keys stored in the Trim Area (TA) partition. I’ll attempt backing them up but, as of today, there is no way of restoring them to the previous state nor knowing if the actual backup is usable at all.

Without these DRM keys, several proprietary audio and video features provided by Sony won’t be available, including some camera post-processing features, color gamut profiles, white balance, noise reduction, X-Reality Video Enhancement, DSEE HX, ClearAudio+, and Widevine L1 support for HD Netflix.

Backup the TA partition

As explained in the previous post, enable developer mode in the phone.

Following this guide, download the latest Magisk release. At the time of writing this it’s v20.4.

Download the tama-mroot.zip with the needed exploit.

Push both archives to the phone (you may need to give consent in a pop-up dialog on the phone):

root$ adb push tama-mroot/tama-mroot.zip Magisk/Magisk-v20.4.zip /data/local/tmp
tama-mroot/tama-mroot.zip: 1 file pushed. 0.5 MB/s (21355 bytes in 0.039s)
Magisk/Magisk-v20.4.zip: 1 file pushed. 32.0 MB/s (5942417 bytes in 0.177s)
2 files pushed. 25.4 MB/s (5963772 bytes in 0.224s)

Get into the phone and follow the steps to get a root shell:

root$ adb shell
H8324:/ $ cd /data/local/tmp
H8324:/data/local/tmp $ unzip tama-mroot.zip
Archive: tama-mroot.zip
inflating: magisk-start.sh
inflating: magisk-setup.sh
inflating: tama-mroot
H8324:/data/local/tmp $ chmod 755 tama-mroot magisk-setup.sh magisk-start.sh
H8324:/data/local/tmp $ ./magisk-setup.sh

[...]

H8324:/data/local/tmp $ cd /data/local/tmp
H8324:/data/local/tmp $ ./tama-mroot

[...]

root_by_cve-2020-0041:/data/local/tmp # ./magisk-start.sh -1

[...]

root_by_cve-2020-0041:/data/local/tmp # ./magisk-start.sh -2

[...]

root_by_cve-2020-0041:/data/local/tmp # ./magisk-start.sh -3

[...]

We can now verify that we really have root privileges:

root_by_cve-2020-0041:/data/local/tmp # id
uid=0(root) gid=0(root) groups=0(root),1004(input),1007(log),1011(adb),1015(sdcard_rw),1028(sdcard_r),3001(net_bt_admin),3002(net_bt),3003(inet),3006(net_bw_stats),3009(readproc),3011(uhid) context=u:r:magisk:s0
root_by_cve-2020-0041:/data/local/tmp # uname -a
Linux localhost 4.9.186-perf+ #1 SMP PREEMPT Fri Jan 17 01:22:05 2020 aarch64

Hence, let’s go ahead and back up the TA partition:

root_by_cve-2020-0041:/data/local/tmp # dd if=/dev/block/bootdevice/by-name/TA of=TA-locked.img
4096+0 records in
4096+0 records out
2097152 bytes (2.0 M) copied, 0.039839 s, 50 M/s
root_by_cve-2020-0041:/data/local/tmp # chown shell:shell TA-locked.img
root_by_cve-2020-0041:/data/local/tmp # sync
root_by_cve-2020-0041:/data/local/tmp # sync

Now, from another terminal on the computer, pull the created backup:

root$ adb pull /data/local/tmp/TA-locked.img

That’s it, we have finished backing up the TA partition!

Now, we can continue to install the LineageOS system.