Replaying 3D traces with piglit


If you don’t know what trace-based rendering regression testing is, read the appendix before continuing.


The Mesa community has witnessed an explosion of interest in Continuous Integration over the last two years.

In addition to checking that the project builds properly, integrating the testing of its functional correctness has become a priority. User space graphics drivers are exercised by a wide variety of tests and test suites. One of those kinds is trace-based rendering regression testing.

The public effort to add this kind of tests into Mesa’s CI started with this mail from Alexandros Frantzis.

At some point, we had support for replaying OpenGL, Vulkan and D3D11 traces using apitrace, RenderDoc and GFXReconstruct with the in-tree tool tracie. However, it was a very custom solution tailored to the needs of Mesa, so I proposed to move this codebase and integrate it into the piglit test suite. It was a natural step forward.

This is how replayer was born into piglit.

replayer

The first step to test a trace is, actually, obtaining a trace. I won’t go into the details of how to create one from scratch. The process is well documented for each of the tools listed above. However, the Mesa community has been collecting publicly distributable traces for a while and placing them in traces-db, whose CI copies them to Freedesktop.org’s MinIO instance.

To make things simple, once we have built and installed piglit, if we would like to test an OpenGL trace created with apitrace, we can download it from there with:

$ replayer.py download \
 	 --download-url https://minio-packet.freedesktop.org/mesa-tracie-public/ \
 	 --db-path ./traces-db \
 	 --force-download \
 	 glxgears/glxgears-2.trace

The parameters are self-explanatory. The downloaded trace will now exist at ./traces-db/glxgears/glxgears-2.trace.

The next step will be to dump an image from the trace. Since it is a .trace file, we will need to have apitrace installed on the system. If we do not specify the call(s) from which to dump the image(s), we will just get the last frame of the trace:

$ replayer.py dump ./traces-db/glxgears/glxgears-2.trace

The dumped PNG image will be at ./results/glxgears-2.trace-0000001413.png. Notice that the number suffix is the snapshot id from the trace.
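
If we want images from other frames, we can point dump at the call numbers of the corresponding buffer swaps instead of relying on the default last frame. As a sketch, assuming a --calls option similar to the one the underlying apitrace tooling uses (check replayer.py dump --help for the exact interface):

$ replayer.py dump --calls 1384,1413 ./traces-db/glxgears/glxgears-2.trace

This would dump one PNG per requested snapshot id into ./results/.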

Dumping from a trace may result in a range of different possible images. One example is when the trace makes use of uninitialized values, leading to undefined behavior.

However, since the original aim was performing pre-merge rendering regression testing in Mesa’s CI, the idea is that replaying any of the provided traces will be quick and the dumped image will be consistent. In other words, if we dump the same frame of a trace several times with the same GFX stack, the image will always be the same.

With this precondition, we can test whether two different images are the same just by hashing their contents. replayer can obtain the hash for the generated dumped image:

$ replayer.py checksum ./results/glxgears-2.trace-0000001413.png 
f8eba0fec6e3e0af9cb09844bc73bdc8

Now, if we build a different commit of Mesa, we can check the image generated at this new point against the previously generated reference image. If everything goes well, we will see something like:

$ replayer.py compare trace \
 	 --download-url https://minio-packet.freedesktop.org/mesa-tracie-public/ \
 	 --device-name gl-vmware-llvmpipe \
 	 --db-path ./traces-db \
 	 --keep-image \
 	 glxgears/glxgears-2.trace f8eba0fec6e3e0af9cb09844bc73bdc8
[dump_trace_images] Info: Dumping trace ./traces-db/glxgears/glxgears-2.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./traces-db/glxgears/glxgears-2.trace
// process.name = "/usr/bin/glxgears"
1384 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

1413 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

error: drawable failed to resize: expected 1515x843, got 300x300
[dump_trace_images] Running: eglretrace --headless --snapshot=1413 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace- ./blog-traces-db/glxgears/glxgears-2.trace
Wrote ./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413.png

OK
[check_image]
    actual: f8eba0fec6e3e0af9cb09844bc73bdc8
  expected: f8eba0fec6e3e0af9cb09844bc73bdc8
[check_image] Images match for:
  glxgears/glxgears-2.trace

PIGLIT: {"images": [{"image_desc": "glxgears/glxgears-2.trace", "image_ref": "f8eba0fec6e3e0af9cb09844bc73bdc8.png", "image_render": "./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413-f8eba0fec6e3e0af9cb09844bc73bdc8.png"}], "result": "pass"}

replayer’s compare subcommand is the one that emits the piglit-formatted test expectations output.

Putting everything together

We can make the whole process much simpler by passing replayer a YAML test list file. For example:

$ cat testing-traces.yml
traces-db:
  download-url: https://minio-packet.freedesktop.org/mesa-tracie-public/

traces:
  - path: gputest/triangle.trace
    expectations:
      - device: gl-vmware-llvmpipe
        checksum: c8848dec77ee0c55292417f54c0a1a49
  - path: glxgears/glxgears-2.trace
    expectations:
      - device: gl-vmware-llvmpipe
        checksum: f53ac20e17da91c0359c31f2fa3f401e
$ replayer.py compare yaml \
 	 --device-name gl-vmware-llvmpipe \
 	 --yaml-file testing-traces.yml 
[check_image] Downloading file gputest/triangle.trace took 5s.
[dump_trace_images] Info: Dumping trace ./replayer-db/gputest/triangle.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./replayer-db/gputest/triangle.trace
// process.name = "/home/anholt/GpuTest_Linux_x64_0.7.0/GpuTest"
397 glXSwapBuffers(dpy = 0x7f0ad0005a90, drawable = 56623106)

510 glXSwapBuffers(dpy = 0x7f0ad0005a90, drawable = 56623106)


/home/anholt/GpuTest_Linux_x64_0.7.0/GpuTest
[dump_trace_images] Running: eglretrace --headless --snapshot=510 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/gputest/triangle.trace- ./replayer-db/gputest/triangle.trace
Wrote ./results/trace/gl-vmware-llvmpipe/gputest/triangle.trace-0000000510.png

OK
[check_image]
    actual: c8848dec77ee0c55292417f54c0a1a49
  expected: c8848dec77ee0c55292417f54c0a1a49
[check_image] Images match for:
  gputest/triangle.trace

[check_image] Downloading file glxgears/glxgears-2.trace took 5s.
[dump_trace_images] Info: Dumping trace ./replayer-db/glxgears/glxgears-2.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./replayer-db/glxgears/glxgears-2.trace
// process.name = "/usr/bin/glxgears"
1384 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

1413 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)


/usr/bin/glxgears
error: drawable failed to resize: expected 1515x843, got 300x300
[dump_trace_images] Running: eglretrace --headless --snapshot=1413 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace- ./replayer-db/glxgears/glxgears-2.trace
Wrote ./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413.png

OK
[check_image]
    actual: f8eba0fec6e3e0af9cb09844bc73bdc8
  expected: f8eba0fec6e3e0af9cb09844bc73bdc8
[check_image] Images match for:
  glxgears/glxgears-2.trace

replayer also features the query subcommand, which is just a helper for reading the YAML files with the test configurations.

Testing the other kinds of supported 3D traces doesn’t change much from what’s shown here. Just make sure to have the needed tools installed: RenderDoc, GFXReconstruct, the VK_LAYER_LUNARG_screenshot layer, Wine and DXVK. A good reference for building, installing and configuring these tools is Mesa’s GL and VK test container build scripts.

replayer also accepts several configuration options to tweak its behavior and to locate the actual tracing tools needed for replaying the different types of traces. Make sure to check the replay section in piglit’s example configuration file.
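
For illustration, such a configuration could look roughly like this; the key names below are assumptions of mine, so rely on piglit.conf.example for the authoritative list:

$ cat piglit.conf
[replay]
; Hypothetical keys pointing at locally built tools
apitrace_bin=/usr/local/apitrace/bin/apitrace
eglretrace_bin=/usr/local/apitrace/bin/eglretrace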

replayer’s README.md file is also a good read for further information.

piglit

replayer is a test runner, similar in fashion to shader_runner or glslparsertest. What we are missing now is how it integrates so we can do piglit runs that produce piglit-formatted results.

This is done through the replay test profile.

This profile needs a couple of configuration values. The easiest approach is just to set the PIGLIT_REPLAY_DESCRIPTION_FILE and PIGLIT_REPLAY_DEVICE_NAME environment variables. They are self-explanatory, but make sure to check the documentation for these and the other configuration options for this profile.

The following example features a run similar to the one done above by invoking replayer directly, but with the piglit integration providing formatted results:

$ PIGLIT_REPLAY_DESCRIPTION_FILE=testing-traces.yml PIGLIT_REPLAY_DEVICE_NAME=gl-vmware-llvmpipe piglit run replay -n replay-example replay-results
[2/2] pass: 2   
Thank you for running Piglit!
Results have been written to replay-results

We can create a summary based on the results:

# piglit summary console replay-results/
trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace: pass
trace/gl-vmware-llvmpipe/gputest/triangle.trace: pass
summary:
       name: replay-example
       ----  --------------
       pass:              2
       fail:              0
      crash:              0
       skip:              0
    timeout:              0
       warn:              0
 incomplete:              0
 dmesg-warn:              0
 dmesg-fail:              0
    changes:              0
      fixes:              0
regressions:              0
      total:              2
       time:       00:00:00

Creating an HTML summary may also be interesting, especially when finding failures!
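
For instance, something along these lines generates a browsable report from the run above (the destination directory is an arbitrary choice of mine):

$ piglit summary html ./replay-summary ./replay-results

Opening ./replay-summary/index.html in a browser then shows each trace’s result.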

Wishlist

  • Through different backends, replayer supports running apitrace, RenderDoc and GFXReconstruct traces. We may want to support other tracing tools in the future. The dummy backend used for functional testing is a good starting point when writing a new backend.
  • The solution chosen for checking whether we have detected a rendering regression depends on having consistent results, as said before. It’d be great if we could add a secondary testing method for when the expected rendered image is variable. Off the top of my head, using exclusion masks could be a quick single-run solution when we know which specific areas in a rendered scenario are the ones fluctuating (see the sketch after this list). For more complex variations, a multi-run based solution seems to be the best option. EzBench has a great statistical approach for this!
  • The current syntax of the YAML test list files implies running the compare subcommand with the default behavior of checking against the last frame of the tested trace. This means first figuring out the call number of the last frame. It would be great to support providing the call numbers directly in the YAML files, to be able to test more than just the last frame and, additionally, cut down the time taken to run the test.
  • The generated HTML summary allows us to see the reference and generated images side by side when a test fails. It’d be great to also have some easy way of checking their differences. Using Rembrandt.js could be a possible solution.
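
As for the exclusion masks idea above, a rough sketch of how a single-run check could work with ImageMagick follows. Note that mask.png is a hypothetical asset, with the known-fluctuating area painted black and the rest white, and that replayer implements none of this today:

$ convert ./results/glxgears-2.trace-0000001413.png mask.png -compose multiply -composite masked.png
$ replayer.py checksum masked.png

Multiplying by the mask zeroes out the fluctuating pixels, so the checksums of two replays remain comparable.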

Thanks a lot to the whole Mesa community for helping with the creation of this tool. Alexandros Frantzis, Rohan Garg and Tomeu Vizoso did a lot of the initial development of the in-tree tracie tool, and Dylan Baker was very patient reviewing my patches for the piglit integration.

Finally, thanks to Igalia for allowing me to work on this.


Appendix

In 3D computer graphics we say «traces», for short, to name the files generated by 3D API capturing tools. These store not only the calls to the specific 3D API but also the internal state of the 3D program during the capturing process: shaders, textures, buffers, etc.

Being able to «record» the execution of a 3D program is very useful. Usually, it allows us to replay the execution without needing the original program from which we generated the trace. It also allows in-depth analysis for debugging and performance optimization, it’s a very good solution for sharing with other developers and, in some cases, it allows us to check how the replay behaves with different GPUs.

In this post, however, I focus on a specific usage: rendering regression testing.

When doing a regression test, we compare a specific metric obtained by replaying the trace with one version of the GFX software stack against the same metric obtained with a different version of the stack. If the value of the metric changes, we have found a regression (or an improvement!).

To make things simpler, we would like to check changes happening in just one of the many elements of the software stack. The most relevant component is the user space driver. In particular, I care about the Mesa drivers and the GNU/Linux stack.

Mainly, there are two kinds of regression testing we can do with a trace: performance or rendering regression testing. When doing a performance one, the checked metric(s) are usually in terms of speed or memory usage. In the case of the rendering ones, we compare the rendered output at one (or many) points during the trace replay. This output, a bitmap image, is the metric that we compare between two different versions of the Mesa driver. If the images differ, we may have found a regression (artifacts, improper colors, etc.) or an enhancement, if the reference image is the one featuring any of those problems.

Installing LineageOS in the Sony Xperia XZ2 Compact Dual (in GNU/Linux) 4/5: Bringing back Sony’s stock camera app

WARNING

I have no responsibility whatsoever if this guideline causes any harm to your device. These posts are intended solely as personal notes for myself. Follow them at your own risk.

WARNING

Through these steps I will unlock the phone’s bootloader, erasing all data. This includes the DRM keys stored in the Trim Area (TA) partition. I’ll attempt to back them up but, as of today, there is no way of restoring them to their previous state, nor of knowing whether the backup is usable at all.

Without these DRM keys, several pieces of proprietary audio and video functionality provided by Sony won’t be available, including some camera post-processing features, color gamut profiles, white balance, noise reduction, X-Reality Video Enhancement, DSEE HX, ClearAudio+, and Widevine L1 support for HD Netflix.

Bringing back the stock camera

In the previous posts we have downgraded the stock firmware from Sony, backed up the Trim Area (TA) partition and installed LineageOS.

Thanks to the great people from the xda-developers forum, we have the chance to add Sony’s stock camera app. We will adb sideload it the same way we installed Magisk in the previous post.

The zip is called SemcCamera (SemcCamera-xz2c-52.1.A.2.1.zip at the moment of writing this) and it is, currently, the only add-on available for the official LineageOS 17.1 image for the xz2c phone.

We download the file, reboot into Recovery Mode and plug the phone into the computer with the USB cable. Select Apply Update -> Apply from ADB:

root$ adb sideload SemcCamera-xz2c-52.1.A.2.1.zip
Total xfer: 1.00x

Now, Go back -> Reboot system now.

Currently, the stock camera won’t work out of the box. It needs SELinux to be disabled or set to Permissive. Luckily, since we have Magisk installed and we can grant root privileges, we can install SELinuxModeChanger and do so.
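
For reference, with root access the same change can be done by hand from a shell. Note that this is just a sketch and the setting does not persist across reboots, which is why an app like SELinuxModeChanger, which reapplies the mode at boot, is handy:

$ adb shell
$ su
# setenforce 0
# getenforce
Permissive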

That’s it, now you should be able to use Sony’s stock camera!

Extra treat: add Sony’s Bokeh app

Sony also provides a nice application for taking fancy photos: Bokeh (Background defocus).

Unfortunately, we cannot simply install it from Google’s Play Store, since it claims that the app is not compatible with this phone.

However, we can force the installation, for example, using the Aurora Store.

Finally, if you want to know about some bumps I hit along the road, continue to the Appendixes.

Review of Igalia’s Graphics activities (2018)

This is the first report about Igalia’s activities around Computer Graphics, specifically 3D graphics and, in particular, the Mesa3D Graphics Library (Mesa), focusing on the year 2018.

GL_ARB_gl_spirv and GL_ARB_spirv_extensions

GL_ARB_gl_spirv is an OpenGL extension whose purpose is to enable an OpenGL program to consume SPIR-V shaders. In the case of GL_ARB_spirv_extensions, it provides a mechanism by which an OpenGL implementation can announce which particular SPIR-V extensions it supports, which is a nice complement to GL_ARB_gl_spirv.

As both extensions, GL_ARB_gl_spirv and GL_ARB_spirv_extensions, are core functionality in OpenGL 4.6, the drivers need to provide them in order to be compliant with that version.

Although Igalia picked up the already started implementation of these extensions in Mesa back in 2017, 2018 was the year in which we put in a great deal of work to provide the needed push to get all the remaining bits in place. Much of this effort provides general support to all the drivers under the Mesa umbrella but, in particular, Igalia implemented the backend code for Intel‘s i965 driver (gen7+). Assuming that the review process for the remaining patches goes without important bumps, it is expected that the whole implementation will land in Mesa at the beginning of 2019.

Throughout the year, Alejandro Piñeiro gave status updates of the ongoing work through his talks at FOSDEM and XDC 2018. This is a video of the latter:

ETC2/EAC

The ETC and EAC formats are lossy compressed texture formats used mostly in embedded devices. OpenGL implementations of version 4.3 and upwards, and OpenGL/ES implementations of version 3.0 and upwards, must support them in order to be conformant with the standard.

Most modern GPUs are able to work directly with the ETC2/EAC formats. Implementations for older GPUs that don’t have that support but want to be conformant with the latest versions of the specs need to provide that functionality through the software parts of the driver.

During 2018, Igalia implemented the missing bits to support GL_OES_copy_image in Intel’s i965 for gen7+, while gen8+ already complied through its HW support. As we were writing this entry, the work finally landed.

VK_KHR_16bit_storage

Igalia finished the work to provide support for the Vulkan extension VK_KHR_16bit_storage into Intel’s Anvil driver.

This extension allows the use of 16-bit types (half floats, 16-bit ints, and 16-bit uints) in push constant blocks and buffers (shader storage buffer objects). This feature can help to reduce the memory bandwidth for Uniform and Storage Buffer data accessed from the shaders and/or optimize Push Constant space, of which there are only a few bytes available, making it a precious shader resource.

shaderInt16

Igalia added Vulkan’s optional feature shaderInt16 to Intel’s Anvil driver. This new functionality provides the means to operate with 16-bit integers inside a shader which, ideally, would lead to better performance when you don’t need a full 32-bit range. However, not all HW platforms have native support for it, still needing to run in 32 bits and, hence, not benefiting from this feature. Such is the case for operations associated with integer division on Intel platforms.

shaderInt16 complements the functionality provided by the VK_KHR_16bit_storage extension.

SPV_KHR_8bit_storage and VK_KHR_8bit_storage

SPV_KHR_8bit_storage is a SPIR-V extension that complements the VK_KHR_8bit_storage Vulkan extension to allow the use of 8-bit types in uniform and storage buffers, and push constant blocks. Similarly to the VK_KHR_16bit_storage extension, this feature can help to reduce the needed memory bandwidth.

Igalia implemented its support into Intel’s Anvil driver.

VK_KHR_shader_float16_int8

Igalia implemented the support for VK_KHR_shader_float16_int8 into Intel’s Anvil driver. This is an extension that enables Vulkan to consume SPIR-V shaders that use Float16 and Int8 types in arithmetic operations. It extends the functionality included with VK_KHR_16bit_storage and VK_KHR_8bit_storage.

In theory, applications that do not need the range and precision of regular 32-bit floating point and integers can use these new types to improve performance. Additionally, its implementation is mostly API agnostic, so most of the work we did should also help to have a proper mediump implementation for GLSL ES shaders in the future.

The review process for the implementation is still ongoing and is on its way to land in Mesa.

VK_KHR_shader_float_controls

VK_KHR_shader_float_controls is a Vulkan extension which allows applications to query and override the implementation’s default floating point behavior for rounding modes, denormals, signed zero and infinity.

Igalia has coded its support into Intel’s Anvil driver and it is currently under review before being merged into Mesa.

VkRunner

VkRunner is a Vulkan shader tester based on shader_runner in Piglit. Its goal is to make it feasible to use test scripts as similar as possible to Piglit’s shader_test format.

Igalia initially created VkRunner as a tool to get more test coverage during the implementation of GL_ARB_gl_spirv. Soon, it was clear that it was useful well beyond the implementation of this specific extension, as a generic way of testing SPIR-V shaders.

Since then, VkRunner has been enabled as an external dependency to run new tests added to the Piglit and VK-GL-CTS suites.

Neil Roberts introduced VkRunner at XDC 2018. This is his talk:

freedreno

During 2018, Igalia also started contributing to the freedreno Mesa driver for Qualcomm GPUs. Among the work done, we tackled multiple bugs identified through the usual test suites used in graphics driver development: Piglit and VK-GL-CTS.

Khronos Conformance

The Khronos conformance program is intended to ensure that products that implement Khronos standards (such as OpenGL or Vulkan drivers) do what they are supposed to do and do it consistently across implementations from the same or different vendors.

This is achieved by producing an extensive test suite, the Conformance Test Suite (VK-GL-CTS or CTS for short), which aims to verify that the semantics of the standard are properly implemented by as many vendors as possible.

In 2018, Igalia continued its work ensuring that the Intel Mesa drivers for both Vulkan and OpenGL are conformant. This work included reviewing and testing patches submitted for inclusion in VK-GL-CTS and continuously checking that the drivers passed the tests. When failures were encountered, we provided patches to correct the problem either in the tests or in the drivers, depending on the outcome of our analysis, or even brought a discussion forward when the source of the problem was incomplete, ambiguous or incorrect spec language.

The most important result out of this significant dedication has been successfully passing conformance applications.

OpenGL 4.6

Igalia helped make Intel’s i965 driver conformant with OpenGL 4.6 from day zero. This was a significant achievement since, besides Intel on Mesa, only nVIDIA managed to do this too.

Igalia specifically contributed to achieving the OpenGL 4.6 milestone by providing the GL_ARB_gl_spirv implementation.

Vulkan 1.1

Igalia also helped make Intel’s Anvil driver conformant with Vulkan 1.1 from day zero.

Igalia specifically contributed to achieving the Vulkan 1.1 milestone by providing the VK_KHR_16bit_storage implementation.

Mesa Releases

Igalia continued throughout 2018 the work it was already carrying on in Mesa’s Release Team. This effort involved a continuous dedication to track the general status of Mesa against the usual test suites and benchmarks, but also to react quickly upon detected regressions, especially coordinating with the Mesa developers and the distribution packagers.

This work was most visible in the multiple bugfix releases, as well as in the branching and creation of a feature release.

CI

Continuous Integration is a must in any serious SW project. In the case of API implementations it is even critical, since there are many important variables that need to be controlled to avoid regressions and to track progress when including new features: agnostic tests that can be used by different implementations, different OS platforms, CPU architectures and, of course, different GPU architectures and generations.

Igalia has kept a sustained effort to keep Mesa’s (and Piglit’s) CI integrations in good health, with an eye on the reported regressions to act immediately upon them. This has been a key tool for our work around Mesa releases, and the experience allowed us to push the initial proposal for a new CI integration when the FreeDesktop projects decided to start their migration to GitLab.

This work, along with the one done around the Mesa releases, led to a shared presentation given by Juan Antonio Suárez during XDC 2018. This is the video of the talk:

XDC 2018

2018 was the year that saw A Coruña host the X.Org Developer’s Conference (XDC), with Igalia as Platinum Sponsor.

The conference was organized by GPUL (Galician Linux User and Developer Group) together with the University of A Coruña, Igalia and, of course, the X.Org Foundation.

Since A Coruña is the town in which the company originated and where we have our headquarters, Igalia had a key role in the organization, which greatly benefited from our vast experience running events. Moreover, several Igalians joined the conference crew and, as mentioned above, we delivered talks around GL_ARB_gl_spirv, VkRunner, and Mesa releases and CI testing.

The feedback from the attendees was very rewarding and we believe the conference was a great event. Here you can see the Closing Session speech given by Samuel Iglesias:

Other activities

Conferences

As usual, Igalia was present at many graphics-related conferences during the year.

New Igalians in the team

Igalia’s graphics team kept growing. Two new developers joined us in 2018:

  • Hyunjun Ko is an experienced Igalian with a strong background in multimedia, specifically GStreamer and Intel’s VAAPI. He is now contributing his impressive expertise to our Graphics team.
  • Arcady Goldmints-Orlov is the latest addition to the team. His previous expertise as a graphics developer working on nVIDIA GPUs fits perfectly with the kind of work we are currently pushing at Igalia.

Conclusion

Thank you for reading this blog post and we look forward to more work on graphics in 2019!


How to watch A La Carta on your Samsung SmartTV if you live abroad

Or, in other words, how to install the applications for Spain on a Samsung SmartTV when you live in another country.

As I mentioned in a previous post, I own a Samsung 3D SmartTV that I bought in Finland, the country where I live.

This kind of «Smart TV» comes with an application store provided by Samsung. Among these applications there is one for the A la carta service from RTVE.es.

So, what’s the problem?

Well, Samsung, I assume in its zeal to protect geographically restricted copyrights, places the applications available for the Spanish market in a local-only category. In other words, you will only be able to install applications for Spain if you bought your TV in Spain. In my case, since I live in Finland and bought my TV there, I cannot install the RTVE.es application… or so it would seem.

The truth is that this limitation is quite stupid.

  • On the one hand, streaming services like «A la carta», or other TV broadcasters, already have some kind of geographic restriction through IP identification. That is, if they detect that you are connecting from a country other than Spain, they won’t serve you the video you request.
  • On the other hand, it is possible to change your SmartTV’s configuration so that it uses the application store for Spain. The catch is that, if you choose a different country, you won’t be able to use the application store of the country where you live. What’s more, if you have applications from another country installed and you switch your SmartTV’s store to Spain, the TV will automatically uninstall the applications not available for Spain!
  • Finally, it is also possible that the people responsible for the RTVE.es application in Samsung’s store were simply not aware that the application would not be available internationally, and everything could be solved in a much simpler way, with a request from RTVE.es for their application to be available not just locally.

Whichever way you look at it, this limitation makes no sense at all, and it is not making me grow fond of Samsung’s products.

Let’s get to the point: how on earth do I make my Samsung SmartTV use the application store for Spain?

First of all, the recipe that follows is for the SmartTV series whose model number «starts» with «F» or «H»; for other model series you can check here. Mine is a UE46F6400AW. The first 4 characters indicate European Union (UE) and 46″ (the screen size). The rest is the specific model: F6400AW.

The steps are:

  1. If you are not already watching the TV signal, select it as the source.
  2. Press «Menu» on the remote control.
  3. Navigate the menu to «System».
  4. In the «System» submenu, choose «Setup».
  5. Continue through the setup screens until you reach the one asking for confirmation of the terms and conditions (Smart Hub Terms & Conditions, Privacy Policy).
  6. On this screen, press the following key combination on the remote control:

    Samsung SmartTV: F series, country selection keys combination

  7. A list of countries will appear in which we can select Spain (or any other country we are interested in).
  8. Continue and finish the setup process.

Done!

Now you will be able to navigate in your TV’s «Smart Hub» to the application store, where you will see the applications for Spain, among them the RTVE.es one.