Replaying 3D traces with piglit


If you don't know what trace-based rendering regression testing is, read the appendix before continuing.


The Mesa community has witnessed an explosion of interest in Continuous Integration in the last two years.

In addition to checking that the project builds properly, integrating testing of its functional correctness has become a priority. User space graphics drivers come with a wide variety of tests and test suites. One of those kinds is trace-based rendering regression testing.

The public effort to add this kind of test to Mesa's CI started with this mail from Alexandros Frantzis.

At some point, we had support for replaying OpenGL, Vulkan and D3D11 traces using apitrace, RenderDoc and GFXReconstruct with the in-tree tool tracie. However, it was a very custom solution tailored to the needs of Mesa, so I proposed moving this codebase and integrating it into the piglit test suite. It was a natural step forward.

This is how replayer was born into piglit.

replayer

The first step to testing a trace is, actually, obtaining a trace. I won't go into the details of how to create one from scratch; the process is well documented for each of the tools listed above. However, the Mesa community has been collecting publicly distributable traces for a while and placing them in traces-db, whose CI copies them to Freedesktop.org's MinIO instance.

To make things simple, once we have built and installed piglit, if we would like to test an OpenGL trace created with apitrace, we can download it from there with:

$ replayer.py download \
 	 --download-url https://minio-packet.freedesktop.org/mesa-tracie-public/ \
 	 --db-path ./traces-db \
 	 --force-download \
 	 glxgears/glxgears-2.trace

The parameters are self-explanatory. The downloaded trace will now exist at ./traces-db/glxgears/glxgears-2.trace.

The next step is to dump an image from the trace. Since it is a .trace file, we will need to have apitrace installed on the system. If we do not specify the call(s) from which to dump the image(s), we will just get the last frame of the trace:

$ replayer.py dump ./traces-db/glxgears/glxgears-2.trace

The dumped PNG image will be at ./results/glxgears-2.trace-0000001413.png. Notice that the number suffix is the snapshot id from the trace.
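If we want images from specific calls rather than just the last frame, the dump subcommand also accepts a list of call numbers. I'm assuming here that the option is spelled --calls and takes a comma-separated list; check replayer.py dump --help for the exact flag:

$ replayer.py dump --calls "1384,1413" ./traces-db/glxgears/glxgears-2.trace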

Dumping from a trace may result in a range of different possible images. One example is when the trace makes use of uninitialized values, leading to undefined behaviors.

However, since the original aim was performing pre-merge rendering regression testing in Mesa's CI, the idea is that replaying any of the provided traces is quick and the dumped image is consistent. In other words, if we dump the same frame of a trace several times with the same GFX stack, the image will always be the same.

With this precondition, we can test whether two different images are the same just by hashing their contents. replayer can obtain the hash of the generated dumped image:

$ replayer.py checksum ./results/glxgears-2.trace-0000001413.png 
f8eba0fec6e3e0af9cb09844bc73bdc8
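
This is, in essence, what the compare subcommand automates. A minimal hand-rolled check of a new dump against a stored reference hash could look like this sketch:

$ ref=f8eba0fec6e3e0af9cb09844bc73bdc8
$ new=$(replayer.py checksum ./results/glxgears-2.trace-0000001413.png)
$ [ "$new" = "$ref" ] && echo "images match" || echo "possible rendering regression"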

Now, if we build a different commit of Mesa, we can check the image generated at this new point against the previously generated reference image. If everything goes well, we will see something like:

$ replayer.py compare trace \
 	 --download-url https://minio-packet.freedesktop.org/mesa-tracie-public/ \
 	 --device-name gl-vmware-llvmpipe \
 	 --db-path ./traces-db \
 	 --keep-image \
 	 glxgears/glxgears-2.trace f8eba0fec6e3e0af9cb09844bc73bdc8
[dump_trace_images] Info: Dumping trace ./traces-db/glxgears/glxgears-2.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./traces-db/glxgears/glxgears-2.trace
// process.name = "/usr/bin/glxgears"
1384 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

1413 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

error: drawable failed to resize: expected 1515x843, got 300x300
[dump_trace_images] Running: eglretrace --headless --snapshot=1413 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace- ./blog-traces-db/glxgears/glxgears-2.trace
Wrote ./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413.png

OK
[check_image]
    actual: f8eba0fec6e3e0af9cb09844bc73bdc8
  expected: f8eba0fec6e3e0af9cb09844bc73bdc8
[check_image] Images match for:
  glxgears/glxgears-2.trace

PIGLIT: {"images": [{"image_desc": "glxgears/glxgears-2.trace", "image_ref": "f8eba0fec6e3e0af9cb09844bc73bdc8.png", "image_render": "./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413-f8eba0fec6e3e0af9cb09844bc73bdc8.png"}], "result": "pass"}

replayer's compare subcommand is the one that emits piglit-formatted test expectation output.

Putting everything together

We can make the whole process much simpler by passing replayer a YAML test list file. For example:

$ cat testing-traces.yml
traces-db:
  download-url: https://minio-packet.freedesktop.org/mesa-tracie-public/

traces:
  - path: gputest/triangle.trace
    expectations:
      - device: gl-vmware-llvmpipe
        checksum: c8848dec77ee0c55292417f54c0a1a49
  - path: glxgears/glxgears-2.trace
    expectations:
      - device: gl-vmware-llvmpipe
        checksum: f53ac20e17da91c0359c31f2fa3f401e
$ replayer.py compare yaml \
 	 --device-name gl-vmware-llvmpipe \
 	 --yaml-file testing-traces.yml 
[check_image] Downloading file gputest/triangle.trace took 5s.
[dump_trace_images] Info: Dumping trace ./replayer-db/gputest/triangle.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./replayer-db/gputest/triangle.trace
// process.name = "/home/anholt/GpuTest_Linux_x64_0.7.0/GpuTest"
397 glXSwapBuffers(dpy = 0x7f0ad0005a90, drawable = 56623106)

510 glXSwapBuffers(dpy = 0x7f0ad0005a90, drawable = 56623106)


/home/anholt/GpuTest_Linux_x64_0.7.0/GpuTest
[dump_trace_images] Running: eglretrace --headless --snapshot=510 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/gputest/triangle.trace- ./replayer-db/gputest/triangle.trace
Wrote ./results/trace/gl-vmware-llvmpipe/gputest/triangle.trace-0000000510.png

OK
[check_image]
    actual: c8848dec77ee0c55292417f54c0a1a49
  expected: c8848dec77ee0c55292417f54c0a1a49
[check_image] Images match for:
  gputest/triangle.trace

[check_image] Downloading file glxgears/glxgears-2.trace took 5s.
[dump_trace_images] Info: Dumping trace ./replayer-db/glxgears/glxgears-2.trace...
[dump_trace_images] Running: apitrace dump --calls=frame ./replayer-db/glxgears/glxgears-2.trace
// process.name = "/usr/bin/glxgears"
1384 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)

1413 glXSwapBuffers(dpy = 0x56060e921f80, drawable = 31457282)


/usr/bin/glxgears
error: drawable failed to resize: expected 1515x843, got 300x300
[dump_trace_images] Running: eglretrace --headless --snapshot=1413 --snapshot-prefix=./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace- ./replayer-db/glxgears/glxgears-2.trace
Wrote ./results/trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace-0000001413.png

OK
[check_image]
    actual: f8eba0fec6e3e0af9cb09844bc73bdc8
  expected: f8eba0fec6e3e0af9cb09844bc73bdc8
[check_image] Images match for:
  glxgears/glxgears-2.trace

replayer also features the query subcommand, which is just a helper for reading the YAML files with the test configuration.

Testing the other kinds of supported 3D traces doesn't change much from what's shown here. Just make sure to have the needed tools installed: RenderDoc, GFXReconstruct, the VK_LAYER_LUNARG_screenshot layer, Wine and DXVK. A good reference for building, installing and configuring these tools is Mesa's GL and VK test container build scripts.

replayer also accepts several configuration options to tweak its behavior and to tell it where to find the actual tracing tools needed for replaying the different types of traces. Make sure to check the replay section in piglit's configuration example file.

replayer‘s README.md file is also a good read for further information.

piglit

replayer is a test runner in a similar fashion to shader_runner or glslparsertest. What remains to be seen is how it integrates so we can do piglit runs that produce piglit-formatted results.

This is done through the replay test profile.

This profile needs a couple of configuration values. The easiest way is just to set the PIGLIT_REPLAY_DESCRIPTION_FILE and PIGLIT_REPLAY_DEVICE_NAME environment variables. They are self-explanatory, but make sure to check the documentation for these and other configuration options for this profile.

The following example features a run similar to the one above that invoked replayer directly, but with piglit integration, providing formatted results:

$ PIGLIT_REPLAY_DESCRIPTION_FILE=testing-traces.yml PIGLIT_REPLAY_DEVICE_NAME=gl-vmware-llvmpipe piglit run replay -n replay-example replay-results
[2/2] pass: 2   
Thank you for running Piglit!
Results have been written to replay-results

We can create a summary based on the results:

# piglit summary console replay-results/
trace/gl-vmware-llvmpipe/glxgears/glxgears-2.trace: pass
trace/gl-vmware-llvmpipe/gputest/triangle.trace: pass
summary:
       name: replay-example
       ----  --------------
       pass:              2
       fail:              0
      crash:              0
       skip:              0
    timeout:              0
       warn:              0
 incomplete:              0
 dmesg-warn:              0
 dmesg-fail:              0
    changes:              0
      fixes:              0
regressions:              0
      total:              2
       time:       00:00:00

Creating an HTML summary may also be interesting, especially when there are failures to track down!
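For example, something along these lines should generate a browsable report at ./replay-summary (check piglit summary html --help for the exact options):

$ piglit summary html --overwrite ./replay-summary replay-results/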

Wishlist

  • Through different backends, replayer supports running apitrace, RenderDoc and GFXReconstruct traces. We may want to support other tracing tools in the future. The dummy backend used for functional testing is a good starting point when writing a new backend.
  • The solution chosen for checking whether we detect a rendering regression depends on having consistent results, as said before. It'd be great if we could add a secondary testing method for when the expected rendered image is variable. Off the top of my head, using exclusion masks could be a quick single-run solution when we know which specific areas in a rendered scenario are the ones fluctuating. For more complex variations, a multi-run based solution seems to be the best option. EzBench has a great statistical approach for this!
  • The current syntax of the YAML test list files implies running the compare subcommand with its default behavior of checking against the last frame of the tested trace. This means first figuring out which call number corresponds to that last frame. It would be great to support providing the call numbers directly in the YAML files, to be able to test more than just the last frame and, additionally, cut down the time taken to run the test.
  • The HTML generated summary allows us to see the reference and generated images side by side when a test fails. It'd also be great to have some easy way of checking their differences. Using Rembrandt.js could be a possible solution.

Thanks a lot to the whole Mesa community for helping with the creation of this tool. Alexandros Frantzis, Rohan Garg and Tomeu Vizoso did a lot of the initial development of the in-tree tracie tool, while Dylan Baker was very patient reviewing my patches for the piglit integration.

Finally, thanks to Igalia for allowing me to work on this.


Appendix

In 3D computer graphics we use the term "traces", for short, for the files generated by 3D API capturing tools, which store not only the calls to the specific 3D API but also the internal state of the 3D program during the capturing process: shaders, textures, buffers, etc.

Being able to "record" the execution of a 3D program is very useful. Usually, it allows us to replay the execution without needing the original program from which we generated the trace, it also allows in-depth analysis for debugging and performance optimization, it's a very good solution for sharing with other developers and, in some cases, it allows us to check how the replay behaves with different GPUs.

In this post, however, I focus on a specific usage: rendering regression testing.

When doing a regression test, we compare a specific metric obtained by replaying the trace with one version of the GFX software stack against the same metric obtained with a different version of the GFX stack. If the value of the metric changes, we have found a regression (or an improvement!).

To make things simpler, we would like to check changes happening just in one of the many elements of the software stack. The most relevant component is the user space driver. In particular, I care about the Mesa drivers and the GNU/Linux stack.

Mainly, there are two kinds of regression testing we can do with a trace: performance or rendering regression testing. In a performance test, the checked metric(s) are usually in terms of speed or memory usage. In a rendering test, we compare the rendered output at one (or several) points during the trace replay. This output, a bitmap image, is the metric that we compare between two different points of the Mesa driver. If the images differ, we may have found a regression (artifacts, improper colors, etc.), or an enhancement, if the reference image is the one featuring any of those problems.

Review of Igalia’s Graphics activities (2018)

This is the first report about Igalia’s activities around Computer Graphics, specifically 3D graphics and, in particular, the Mesa3D Graphics Library (Mesa), focusing on the year 2018.

GL_ARB_gl_spirv and GL_ARB_spirv_extensions

GL_ARB_gl_spirv is an OpenGL extension whose purpose is to enable an OpenGL program to consume SPIR-V shaders. In the case of GL_ARB_spirv_extensions, it provides a mechanism by which an OpenGL implementation would be able to announce which particular SPIR-V extensions it supports, which is a nice complement to GL_ARB_gl_spirv.

As both extensions, GL_ARB_gl_spirv and GL_ARB_spirv_extensions, are core functionality in OpenGL 4.6, the drivers need to provide them in order to be compliant with that version.

Although Igalia picked up the already started implementation of these extensions in Mesa back in 2017, 2018 was the year in which we put in a great deal of work to provide the needed push to get all the remaining bits in place. Much of this effort provides general support to all the drivers under the Mesa umbrella but, in particular, Igalia implemented the backend code for Intel's i965 driver (gen7+). Assuming that the review process for the remaining patches goes without important bumps, it is expected that the whole implementation will land in Mesa during the beginning of 2019.

Throughout the year, Alejandro Piñeiro gave status updates of the ongoing work through his talks at FOSDEM and XDC 2018. This is a video of the latter:

ETC2/EAC

The ETC and EAC formats are lossy compressed texture formats used mostly in embedded devices. OpenGL implementations from version 4.3 onwards, and OpenGL ES implementations from version 3.0 onwards, must support them in order to be conformant with the standard.

Most modern GPUs are able to work directly with the ETC2/EAC formats. Implementations for older GPUs that don’t have that support but want to be conformant with the latest versions of the specs need to provide that functionality through the software parts of the driver.

During 2018, Igalia implemented the missing bits to support GL_OES_copy_image in Intel's i965 driver for gen7+, while gen8+ was already complying through its HW support. As we were writing this entry, the work finally landed.

VK_KHR_16bit_storage

Igalia finished the work to provide support for the Vulkan extension VK_KHR_16bit_storage into Intel’s Anvil driver.

This extension allows the use of 16-bit types (half floats, 16-bit ints, and 16-bit uints) in push constant blocks and buffers (shader storage buffer objects). This feature can help to reduce the memory bandwidth for Uniform and Storage Buffer data accessed from the shaders and/or optimize Push Constant space, of which there are only a few bytes available, making it a precious shader resource.

shaderInt16

Igalia added Vulkan's optional feature shaderInt16 to Intel's Anvil driver. This new functionality provides the means to operate with 16-bit integers inside a shader which, ideally, would lead to better performance when you don't need a full 32-bit range. However, not all HW platforms have native support for every operation, still needing to run some of them in 32-bit and, hence, not benefiting from this feature. Such is the case for operations associated with integer division on Intel platforms.

shaderInt16 complements the functionality provided by the VK_KHR_16bit_storage extension.

SPV_KHR_8bit_storage and VK_KHR_8bit_storage

SPV_KHR_8bit_storage is a SPIR-V extension that complements the VK_KHR_8bit_storage Vulkan extension to allow the use of 8-bit types in uniform and storage buffers, and push constant blocks. Similarly to the VK_KHR_16bit_storage extension, this feature can help to reduce the needed memory bandwidth.

Igalia implemented support for it in Intel's Anvil driver.

VK_KHR_shader_float16_int8

Igalia implemented the support for VK_KHR_shader_float16_int8 into Intel’s Anvil driver. This is an extension that enables Vulkan to consume SPIR-V shaders that use Float16 and Int8 types in arithmetic operations. It extends the functionality included with VK_KHR_16bit_storage and VK_KHR_8bit_storage.

In theory, applications that do not need the range and precision of regular 32-bit floating point and integers can use these new types to improve performance. Additionally, its implementation is mostly API agnostic, so most of the work we did should also help to have a proper mediump implementation for GLSL ES shaders in the future.

The review process for the implementation is still ongoing and is on its way to land in Mesa.

VK_KHR_shader_float_controls

VK_KHR_shader_float_controls is a Vulkan extension which allows applications to query and override the implementation’s default floating point behavior for rounding modes, denormals, signed zero and infinity.

Igalia has coded its support into Intel’s Anvil driver and it is currently under review before being merged into Mesa.

VkRunner

VkRunner is a Vulkan shader tester based on shader_runner in Piglit. Its goal is to make it feasible to write test scripts as similar as possible to Piglit's shader_test format.

Igalia initially created VkRunner as a tool to get more test coverage during the implementation of GL_ARB_gl_spirv. Soon it was clear that it was useful way beyond the implementation of this specific extension, as a generic way of testing SPIR-V shaders.

Since then, VkRunner has been enabled as an external dependency to run new tests added to the Piglit and VK-GL-CTS suites.

Neil Roberts introduced VkRunner at XDC 2018. This is his talk:

freedreno

During 2018, Igalia also started contributing to the freedreno Mesa driver for Qualcomm GPUs. Among the work done, we have tackled multiple bugs identified through the usual testing suites used in graphics driver development: Piglit and VK-GL-CTS.

Khronos Conformance

The Khronos conformance program is intended to ensure that products that implement Khronos standards (such as OpenGL or Vulkan drivers) do what they are supposed to do and they do it consistently across implementations from the same or different vendors.

This is achieved by producing an extensive test suite, the Conformance Test Suite (VK-GL-CTS or CTS for short), which aims to verify that the semantics of the standard are properly implemented by as many vendors as possible.

In 2018, Igalia has continued its work ensuring that the Intel Mesa drivers for both Vulkan and OpenGL are conformant. This work included reviewing and testing patches submitted for inclusion in VK-GL-CTS and continuously checking that the drivers passed the tests. When failures were encountered we provided patches to correct the problem either in the tests or in the drivers, depending on the outcome of our analysis or, even, brought a discussion forward when the source of the problem was incomplete, ambiguous or incorrect spec language.

The most important result out of this significant dedication has been successfully passing conformance applications.

OpenGL 4.6

Igalia helped make Intel's i965 driver conformant with OpenGL 4.6 from day zero. This was a significant achievement since, besides Intel's Mesa driver, only nVIDIA managed to do this.

Igalia specifically contributed to achieving the OpenGL 4.6 milestone by providing the GL_ARB_gl_spirv implementation.

Vulkan 1.1

Igalia also helped make Intel's Anvil driver conformant with Vulkan 1.1 from day zero.

Igalia specifically contributed to achieving the Vulkan 1.1 milestone by providing the VK_KHR_16bit_storage implementation.

Mesa Releases

Igalia continued the work it was already carrying out in Mesa's Release Team throughout 2018. This effort involved continuous dedication to tracking the general status of Mesa against the usual test suites and benchmarks, but also to reacting quickly to detected regressions, especially by coordinating with the Mesa developers and the distribution packagers.

The most visible result of this work was the publication of multiple bugfix releases, as well as doing the branching and creating a feature release.

CI

Continuous Integration is a must in any serious SW project. In the case of API implementations it is even critical since there are many important variables that need to be controlled to avoid regressions and track the progress when including new features: agnostic tests that can be used by different implementations, different OS platforms, CPU architectures and, of course, different GPU architectures and generations.

Igalia has kept up a sustained effort to keep Mesa's (and Piglit's) CI integrations in good health, with an eye on the reported regressions in order to act immediately upon them. This has been a key tool for our work around Mesa releases, and the experience allowed us to push the initial proposal for a new CI integration when the FreeDesktop projects decided to start their migration to GitLab.

This work, along with the one done on the Mesa releases, led to a shared presentation, given by Juan Antonio Suárez during XDC 2018. This is the video of the talk:

XDC 2018

2018 was the year that saw A Coruña hosting the X.Org Developer’s Conference (XDC) and Igalia as Platinum Sponsor.

The conference was organized by GPUL (Galician Linux User and Developer Group) together with the University of A Coruña, Igalia and, of course, the X.Org Foundation.

Since A Coruña is the town in which the company originated and where we have our headquarters, Igalia had a key role in the organization, which benefited greatly from our vast experience running events. Moreover, several Igalians joined the conference crew and, as mentioned above, we delivered talks around GL_ARB_gl_spirv, VkRunner, and Mesa releases and CI testing.

The feedback from the attendees was very rewarding and we believe the conference was a great event. Here you can see the Closing Session speech given by Samuel Iglesias:

Other activities

Conferences

As usual, Igalia was present in many graphics related conferences during the year:

New Igalians in the team

Igalia’s graphics team kept growing. Two new developers joined us in 2018:

  • Hyunjun Ko is an experienced Igalian with a strong background in multimedia, specifically GStreamer and Intel's VAAPI. He is now contributing his impressive expertise to our Graphics team.
  • Arcady Goldmints-Orlov is the latest addition to the team. His previous expertise as a graphics developer working on nVIDIA GPUs fits perfectly with the kind of work we are currently pushing at Igalia.

Conclusion

Thank you for reading this blog post and we look forward to more work on graphics in 2019!


matrix-send me a notification!

When you are working in the console of a Un*x system you always have the possibility of using some kind of notification system to warn you when a task has completed. Quite typically, that would involve an email that could arrive at your box's local inbox or, if you have a mail agent properly configured, at some other inbox on the Internet.

With the arrival of Instant Messaging systems you could somehow move from the good old email notification to some other fancy service. That has been my preferred method for quite a while, since I understand email as a "non-instant" messaging system. Basically, I do not want to get instant notifications when a mail arrives. Add to that the hassle of setting some kind of filter criteria to get notifications only for specific mail rules, and the not yet universally supported IMAP4 push method, instead of polling for newly arrived mail …

Anyway, long story short, for some time now we have been using [matrix] as our Instant Messaging service at Igalia so, why not get notifications there when a task is completed?

Yes, you have guessed correctly, that's possible and, actually, it's very easy to set up, especially with the help of matrix-send.

First, you need an account that will send you the notification(s). Ideally, that would be a bot user, but it could be any account. Then, you have to get an access token for that user so you can interact with the matrix server from the command line as any other ordinary matrix client would. Finally, you need to create a chat room between that user and your own in order to keep the communication going. All this is explained in matrix's client-server API documentation but, to make things easier, it would go as follows:

$ curl -XPOST -d '{"user":"", "password":"", "type":"m.login.password"}' "https:///_matrix/client/r0/login"
{
    "access_token": "",
    "device_id": "",
    "home_server": "",
    "user_id": "@:"
}

This will give you the needed access-token.

Now, from your regular matrix client, invite the bot user to a conversation in a new room. Check in the configuration of the new room for its internal ID. It would be something like
!<internal-id>:<home-server>.

Then, accept such invitation from the command line:

$ curl -XPOST -d '{}' "https://<home-server>/_matrix/client/r0/rooms/%21<internal-id>:<home-server>/join?access_token=<access-token>"
{
    "room_id": "!<internal-id>:<home-server>"
}
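
With the token and the room in place, the bot user can already post messages to that room through the same client-server API; this is, in essence, what matrix-send automates for us. Roughly (the trailing transaction id is an arbitrary string):

$ curl -XPUT -d '{"msgtype":"m.text", "body":"Hello World!"}' "https://<home-server>/_matrix/client/r0/rooms/%21<internal-id>:<home-server>/send/m.room.message/<txn-id>?access_token=<access-token>"
{
    "event_id": "<event-id>"
}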

All that is left is to configure matrix-send and start using it. Mind you, I've made a small addition that has not been merged yet, so I would just clone from my fork.

The configuration file would look like this:

$ cat ~/.config/matrix-send/config.ini
[DEFAULT]
endpoint=https://<home-server>/_matrix/
access_token=<access-token>
channel_id=!<internal-id>:<home-server>
msgtype=m.text

The interesting addition of my own is the msgtype field. By default, matrix-send uses the value m.notice which, depending on the configuration, quite typically won't trigger a notification in your matrix client.

All that is left is to make matrix-send executable and test it:

$ chmod +x <path-to-matrix-send>/matrix-send.py
$ <path-to-matrix-send>/matrix-send.py "Hello World!"
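
From then on, chaining it after any long-running task gives you exactly the kind of notification this post is after. For example (assuming the clone lives at <path-to-matrix-send>):

$ make -j8; <path-to-matrix-send>/matrix-send.py "build finished with exit status $?"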

Switching between nouveau and the nVIDIA proprietary OpenGL driver in (Debian) GNU/Linux

So lately I've been devoting my time at Igalia to the GNU/Linux graphics stack focusing, more specifically, on Mesa, the most popular open-source implementation of the OpenGL specification.

When working on Mesa and piglit, its testing suite, quite often you would like to compare the results obtained when running specific OpenGL code with one driver or another.

In the case of nVIDIA graphics cards we have the chance of comparing the default open source driver provided by Mesa, nouveau, with the proprietary driver provided by nVIDIA. For installing the nVIDIA driver you will have to run something like:

root$ apt-get install linux-headers nvidia-driver nvidia-kernel-dkms nvidia-xconfig nvidia-kernel-amd64

Changing from one driver to another involves several steps, so I decided to create a quick and dirty script to help with this.

The actions done by this script are:

  1. Instruct your X Server to use the appropriate X driver.
    These instructions apply to the X.org server only.
    When using the default nouveau driver in Debian, the X.org server is able to configure itself automatically. However, when using the nVIDIA driver you will most probably have to provide the proper settings to X.org.
    nVIDIA provides the package nvidia-xconfig. This package provides a tool of the same name that will generate an X.org configuration file suitable to work with the nVIDIA X driver:

    root$ nvidia-xconfig 
    
    WARNING: Unable to locate/open X configuration file.
    
    Package xorg-server was not found in the pkg-config search path.
    Perhaps you should add the directory containing `xorg-server.pc'
    to the PKG_CONFIG_PATH environment variable
    No package 'xorg-server' found
    New X configuration file written to '/etc/X11/xorg.conf'
    

    I have embedded this generated file into the provided custom script since it is suitable for my system:

        echo 'Section "ServerLayout"
        Identifier     "Layout0"
        Screen      0  "Screen0"
        InputDevice    "Keyboard0" "CoreKeyboard"
        InputDevice    "Mouse0" "CorePointer"
    EndSection
    
    Section "Files"
    EndSection
    
    Section "InputDevice"
        # generated from default
        Identifier     "Mouse0"
        Driver         "mouse"
        Option         "Protocol" "auto"
        Option         "Device" "/dev/psaux"
        Option         "Emulate3Buttons" "no"
        Option         "ZAxisMapping" "4 5"
    EndSection
    
    Section "InputDevice"
        # generated from default
        Identifier     "Keyboard0"
        Driver         "kbd"
    EndSection
    
    Section "Monitor"
        Identifier     "Monitor0"
        VendorName     "Unknown"
        ModelName      "Unknown"
        HorizSync       28.0 - 33.0
        VertRefresh     43.0 - 72.0
        Option         "DPMS"
    EndSection
    
    Section "Device"
        Identifier     "Device0"
        Driver         "nvidia"
        VendorName     "NVIDIA Corporation"
    EndSection
    
    Section "Screen"
        Identifier     "Screen0"
        Device         "Device0"
        Monitor        "Monitor0"
        DefaultDepth    24
        SubSection     "Display"
            Depth       24
        EndSubSection
    EndSection
    ' > /etc/X11/xorg.conf
    

    I would recommend substituting this with another configuration file generated with nvidia-xconfig on your own system.

  2. Select the proper GLX library.
    Fortunately, Debian provides the alternatives mechanism to select between one or the other.

    ALTERNATIVE=""
    

        ALTERNATIVE="/usr/lib/mesa-diverted"
    

        ALTERNATIVE="/usr/lib/nvidia"
    

    update-alternatives --set glx "${ALTERNATIVE}"
    
  3. Black list the module we don’t want the Linux kernel to load on start up.
    Again, in Debian, the nVIDIA driver package installs the file /etc/nvidia/nvidia-blacklists-nouveau.conf that is linked, then, from /etc/modprobe.d/nvidia-blacklists-nouveau.conf instructing that the open source nouveau kernel driver for the graphic card should be avoided.
    When selecting nouveau, this script removes the soft link and creates a new file which, instead of blacklisting the nouveau driver, blacklists the nVIDIA proprietary one:

        rm -f /etc/modprobe.d/nvidia-blacklists-nouveau.conf
        echo "blacklist nvidia" > /etc/modprobe.d/nouveau-blacklists-nvidia.conf
    

    When selecting nVIDIA, the previous file is removed and the soft link is restored.

  4. Re-generate the image used in the initial boot.
    This will ensure that we are using the proper kernel driver from the very beginning of the system boot:

    update-initramfs -u
    

With these actions you will already be able to switch your running graphics driver.
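
For reference, a rough skeleton of such a script, putting the four steps together, could look like the following sketch (the actual script linked above handles more details, and removing xorg.conf in the nouveau case is just an assumption of mine):

    #!/bin/sh
    # Usage: alternate-nouveau-nvidia.sh [nouveau|nvidia]
    DRIVER="$1"

    if [ "${DRIVER}" = "nouveau" ]; then
        # 1. Let X.org configure itself automatically.
        rm -f /etc/X11/xorg.conf
        # 2. Point the glx alternative at Mesa's implementation.
        update-alternatives --set glx /usr/lib/mesa-diverted
        # 3. Blacklist the proprietary module instead of nouveau.
        rm -f /etc/modprobe.d/nvidia-blacklists-nouveau.conf
        echo "blacklist nvidia" > /etc/modprobe.d/nouveau-blacklists-nvidia.conf
    elif [ "${DRIVER}" = "nvidia" ]; then
        # 1. Install the nvidia-xconfig generated X.org configuration
        #    (the embedded file shown above).
        echo '...' > /etc/X11/xorg.conf
        # 2. Point the glx alternative at the nVIDIA implementation.
        update-alternatives --set glx /usr/lib/nvidia
        # 3. Restore the blacklisting of nouveau.
        rm -f /etc/modprobe.d/nouveau-blacklists-nvidia.conf
        ln -s /etc/nvidia/nvidia-blacklists-nouveau.conf \
            /etc/modprobe.d/nvidia-blacklists-nouveau.conf
    else
        echo "Usage: $0 [nouveau|nvidia]" >&2
        exit 1
    fi

    # 4. Re-generate the initramfs so the right module is used from boot.
    update-initramfs -u

    echo "${DRIVER} successfully set. Reboot your system to apply the changes ..."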

You will switch to nouveau with:

root$ /alternate-nouveau-nvidia.sh nouveau
update-alternatives: using /usr/lib/mesa-diverted to provide /usr/lib/glx (glx) in manual mode
update-initramfs: Generating /boot/initrd.img-3.17.0

nouveau successfully set. Reboot your system to apply the changes ...

And to the nVIDIA proprietary driver with:

root$ /alternate-nouveau-nvidia.sh nvidia
update-alternatives: using /usr/lib/nvidia to provide /usr/lib/glx (glx) in manual mode
update-initramfs: Generating /boot/initrd.img-3.17.0

nvidia successfully set. Reboot your system to apply the changes ...

It is recommended to reboot the system, although theoretically you could just unload the kernel driver and restart the X.org server. The reason is that it has been reported that unloading the nVIDIA kernel driver and loading a different one does not always work correctly.

I hope this will be helpful for your hacking time!

Side tabs in Empathy

Going quickly to the interesting part.

If you happen to use Ubuntu Saucy 13.10 and would like to have side tabs in Empathy, just run the following command:


$ sudo add-apt-repository ppa:tanty/ppa

If, in addition to using Ubuntu Saucy 13.10, you are also using the GNOME3 Team's PPA, you will need to run the following command:


$ sudo add-apt-repository ppa:tanty/gnome3

Finally, update your repositories, upgrade empathy and set the proper configuration:


$ sudo apt-get update && sudo apt-get install empathy
...
$ gsettings set org.gnome.Empathy.conversation tab-position 'left'

After this, you can just open the chat window in a newly running Empathy instance and you should see something like this:

Side tabs in Empathy, a screenshot by ::Tanty:: on Flickr.

Motivation

I'm a long time user of Jabber and Empathy. I use it for every day's communications and, at Igalia, we have several internal rooms in which we coordinate ourselves. Because of the number of rooms I am in on a regular basis, Empathy's chat window is unable to display the tabs for all of them in the top bar of the conversation window.

This forces me either to split the rooms into different windows or to navigate among the tabs every now and then to check whether there is any interesting update. Quite annoying 🙂 .

Some time ago, #586145 was filed requesting the possibility of having the chat room tabs displayed not only on top but also in other positions, especially on the side.

Hence, I decided to take the existing patch by Neil Roberts and make some small changes to it in order to be able to have these side tabs.

With this new feature, you can change the position of the tabs just by changing a setting, as the position property is bound to it. If you want to set the tabs at 'top', 'left', 'bottom' or 'right', you should run, respectively:


$ gsettings set org.gnome.Empathy.conversation tab-position 'top'
$ gsettings set org.gnome.Empathy.conversation tab-position 'left'
$ gsettings set org.gnome.Empathy.conversation tab-position 'bottom'
$ gsettings set org.gnome.Empathy.conversation tab-position 'right'

Now, I've uploaded a new version of the patch and I'm waiting for it to pass the review process and land.

This is a tiny enhancement on top of the great work that several GNOME developers have done in Empathy over the years. However, it is really making a difference to me, so I've decided to share it quickly in case someone else finds it useful, since it will take a while to reach the main distributions. Hence, I've ported it to the Empathy version I'm using in Ubuntu Saucy 13.10 running on my desktop.

If you want to give it a try, just follow the instructions I’ve written at the beginning of this post.

Final notes

In addition to Empathy, you will be able to find in my PPAs:

  • A working (and custom) version of the faulty official icecc package with patches fixing LP#1182491.
  • A custom version of webkitgtk with patches fixing WK#115650 which will speed up opening new tabs in Web.

Enjoy!

Update: I've recently added patched empathy versions also for Ubuntu Trusty 14.04.

Update 2: I've recently added patched empathy versions also for Ubuntu Utopic 14.10.

Quickly publishing in your Ubuntu PPA

This is more a note pad for myself with quick instructions about how to upload a (usually patched) package to my own PPAs.

Patching an existing package

The first thing is downloading the sources of the package from the repository providing the buggy binary package installed on my system.

For example, when patching webkitgtk, if my installed package is from a vanilla Ubuntu release, I only have to check that I have the source from the official Ubuntu repositories. However, if my installed package is from another PPA, I will have to check that I have the source from it or, if not, I would have to download the needed packages manually. Let’s assume my installed package is coming from the GNOME3 Team Ubuntu PPA:

$ cd ~/ppa/ && mkdir -p webkitgtk/gnome3 && cd webkitgtk/gnome3
$ apt-get source webkitgtk
Reading package lists... Done
Building dependency tree
Reading state information... Done
NOTICE: 'webkitgtk' packaging is maintained in the 'Git' version control system at:
git://git.debian.org/git/pkg-webkit/webkit.git
Need to get 9,440 kB of source archives.
Get:1 http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu/ saucy/main webkitgtk 2.3.2-1ubuntu6~saucy1 (tar) [9,353 kB]
Get:2 http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu/ saucy/main webkitgtk 2.3.2-1ubuntu6~saucy1 (diff) [82.2 kB]
Get:3 http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu/ saucy/main webkitgtk 2.3.2-1ubuntu6~saucy1 (dsc) [4,577 B]
Fetched 9,440 kB in 3s (2,769 kB/s)
gpgv: Signature made Sun 22 Dec 2013 01:34:25 AM EET using RSA key ID 153ACABA
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./webkitgtk_2.3.2-1ubuntu6~saucy1.dsc
dpkg-source: info: extracting webkitgtk in webkitgtk-2.3.2
dpkg-source: info: unpacking webkitgtk_2.3.2.orig.tar.xz
dpkg-source: info: unpacking webkitgtk_2.3.2-1ubuntu6~saucy1.debian.tar.gz
dpkg-source: info: applying 02_notebook_scroll.patch
dpkg-source: info: applying aarch64.patch
dpkg-source: info: applying fix-ppc.diff
dpkg-source: info: applying fix-aarch64.patch
dpkg-source: info: applying remove-use-lockfree-threadsaferefcounted.patch
dpkg-source: info: applying no-jit-build-failure.patch
dpkg-source: info: applying 0001-GTK-Fails-to-build-with-freetype-2.5.1.patch
dpkg-source: info: applying disable-jit-harder.patch
dpkg-source: info: applying fix-llint-c-loop.patch
dpkg-source: info: applying fix-armv7.patch
dpkg-source: info: applying bugzilla_clear_surface.patch
dpkg-source: info: applying ppc64el.patch
$ cd webkitgtk-2.3.2

Just in case, something I like to do is to add the code from the downloaded package to a local git repository:

$ git init
$ git add *
$ git commit -m "Initial commit"
...

Then, it is time to apply the needed changes to the source code. This is where git comes in handy, in case these changes are not trivial and actually need some more work. When we are done with the changes, we have to add them to the Debian package as an additional patch on top of the original source. We use dpkg-source for this:

$ dpkg-source --commit
dpkg-source: info: local changes detected, the modified files are:
 webkitgtk-2.3.2/Source/WebKit2/GNUmakefile.am
 webkitgtk-2.3.2/Source/WebKit2/GNUmakefile.list.am
 webkitgtk-2.3.2/Source/WebKit2/Shared/Plugins/Netscape/NetscapePluginModule.h
 webkitgtk-2.3.2/Source/WebKit2/Shared/Plugins/Netscape/x11/NetscapePluginModuleX11.cpp
 webkitgtk-2.3.2/Source/WebKit2/UIProcess/Plugins/gtk/PluginInfoCache.cpp
 webkitgtk-2.3.2/Source/WebKit2/UIProcess/Plugins/gtk/PluginInfoCache.h
 webkitgtk-2.3.2/Source/WebKit2/UIProcess/Plugins/unix/PluginInfoStoreUnix.cpp
Enter the desired patch name: 0001-GTK-WK2-Blocks-when-fetching-plugins-information.patch
...

We enter the patch name and the description of the changes:

$ cat debian/patches/0001-GTK-WK2-Blocks-when-fetching-plugins-information.patch
Description: [GTK][WK2] Blocks when fetching plugins information
 https://bugs.webkit.org/show_bug.cgi?id=115650
 .
 Patch by Carlos Garcia Campos.
 Reviewed by Gustavo Noronha Silva.
 .
 Use a persistent cache to store the plugins metadata to avoid
 having to load all the plugins everytime a plugin is used for the
 first time.
 .
 * GNUmakefile.am:
 * GNUmakefile.list.am:
 * Shared/Plugins/Netscape/NetscapePluginModule.h:
 * Shared/Plugins/Netscape/x11/NetscapePluginModuleX11.cpp:
 (WebKit::NetscapePluginModule::parseMIMEDescription): Make this
 method public.
 (WebKit::NetscapePluginModule::buildMIMEDescription): Added this
 helper to build the MIME description string.
 * UIProcess/Plugins/gtk/PluginInfoCache.cpp: Added.
 (WebKit::PluginInfoCache::shared):
 (WebKit::PluginInfoCache::PluginInfoCache):
 (WebKit::PluginInfoCache::~PluginInfoCache):
 (WebKit::PluginInfoCache::saveToFileIdleCallback):
 (WebKit::PluginInfoCache::saveToFile):
 (WebKit::PluginInfoCache::getPluginInfo):
 (WebKit::PluginInfoCache::updatePluginInfo):
 * UIProcess/Plugins/gtk/PluginInfoCache.h: Added.
 * UIProcess/Plugins/unix/PluginInfoStoreUnix.cpp:
 (WebKit::PluginInfoStore::getPluginInfo): Check first if we have
 metadata of the plugin in the cache and update the cache if we
 loaded the plugin to get its metadata.
...

Finally, we modify the release information, adding or increasing the non-maintainer digit. For example, in this case the downloaded source version was 2.3.2-1ubuntu6~saucy1, so I'm setting 2.3.2-1ubuntu6~saucy1.1. Also, remember to provide the proper distribution name or to modify it when writing down the log of the changes. In this case, we are using saucy. Check also that you are using the proper email for the log. In my PPAs I use my personal one:

$ DEBEMAIL="my@personal.email" dch -n -D saucy
$ cat debian/changelog
webkitgtk (2.3.2-1ubuntu6~saucy1.1) saucy; urgency=low

  * Fixes #115650:
    - debian/patches/0001-GTK-WK2-Blocks-when-fetching-plugins-information.patch

 -- Andres Gomez   Wed, 19 Mar 2014 14:26:19 +0200
...

With this, we are done modifying the source of the package.

Importing patch alternative

Maybe this is a cleaner and quicker way of patching the downloaded sources. Instead of modifying the sources and running dpkg-source --commit, we can just import an existing patch that applies to the source code.

To do this, we just have to run:

$ quilt import <path_to_patch>/my_patch.patch

This also works for Debian package versions for which dpkg-source --commit won't work. In addition, it is the quickest way to reuse a patch from a package in a previous Ubuntu distribution in a newer one, for example.

From here, we follow the same steps as above to add the release information.

Building the source package

We just have to take into account that, when you have more than one GPG key available, the signing of the package will fail during the process, as in:

$ debuild -S -rfakeroot
...
Finished running lintian.
Now signing changes and any dsc files...
 signfile webkitgtk_2.3.2-1ubuntu6~saucy1.1.dsc Andres Gomez 
gpg: skipped "Andres Gomez ": secret key not available
gpg: /tmp/debsign.vhpVY32w/webkitgtk_2.3.2-1ubuntu6~saucy1.1.dsc: clearsign failed: secret key not available
debsign: gpg error occurred!  Aborting....
debuild: fatal error at line 1280:
running debsign failed

Hence, you have to provide the id of the key to use with the -k parameter.

In addition, if the sources used for the package are not coming from one of the official Ubuntu repositories, you will also need to provide the sources when uploading to the PPA. For this, you have to pass the -sa parameter. For the example used here, as we are taking the source from the GNOME3 Team Ubuntu PPA, we will pass this parameter as in:

$ debuild -S -sa -rfakeroot -k3FEA1034

For other packages, which we modify directly from the sources of the official packages provided by Ubuntu, we just use:

$ debuild -S -rfakeroot -k3FEA1034

Optional local build

A local build is not really necessary, but it will tell you whether your applied changes break the compilation of the package.

The best way of doing a trustworthy local build is using pbuilder.

When using pbuilder we have to be sure that we are using the proper packages not only from Ubuntu’s official repositories but also from the PPAs our target PPA depends on and also our own PPA itself.

I've already created the tarballs with the chroot distributions for my own PPAs. However, in order to show an example, we would use a line like the following one for creating a new tarball for my gnome3 PPA, which depends on my ppa PPA and also on GNOME3 Team's gnome3 PPA:

$ sudo pbuilder --create --distribution saucy --mirror "http://fi.archive.ubuntu.com/ubuntu/" --othermirror "deb http://fi.archive.ubuntu.com/ubuntu/ saucy restricted universe multiverse|deb http://fi.archive.ubuntu.com/ubuntu/ saucy-updates main restricted universe multiverse|deb http://fi.archive.ubuntu.com/ubuntu/ saucy-proposed main restricted universe multiverse|deb http://fi.archive.ubuntu.com/ubuntu/ saucy-security main restricted universe multiverse|deb http://archive.canonical.com/ubuntu saucy partner|deb http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu trusty main|deb http://ppa.launchpad.net/tanty/ppa/ubuntu saucy main|deb http://ppa.launchpad.net/tanty/gnome3/ubuntu saucy main" --basetgz <path_to_base_pbuilder>/saucy-gnome3.tgz --buildplace <path_to_base_pbuilder>/build --aptcache "<path_to_base_pbuilder>/aptcache/"

I make use of the <path_to_base_pbuilder> because by default it is all done at /var and I do not always have enough space there.

Once created, and following our example, we would be building our package for the target gnome3 PPA as follows:

$ sudo pbuilder --build --mirror "http://fi.archive.ubuntu.com/ubuntu/" --othermirror "deb http://fi.archive.ubuntu.com/ubuntu/ saucy main restricted universe multiverse|deb http://fi.archive.ubuntu.com/ubuntu/ saucy-updates main restricted universe multiverse|deb http://fi.archive.ubuntu.com/ubuntu/ saucy-proposed main restricted universe multiverse|deb http://fi.archive.ubuntu.com/ubuntu/ saucy-security main restricted universe multiverse|deb http://archive.canonical.com/ubuntu saucy partner|deb http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu trusty main|deb http://ppa.launchpad.net/tanty/ppa/ubuntu saucy main|deb http://ppa.launchpad.net/tanty/gnome3/ubuntu saucy main" --basetgz <path_to_base_pbuilder>/saucy-gnome3.tgz --buildplace <path_to_base_pbuilder>/build --aptcache "<path_to_base_pbuilder>/aptcache/" ../webkitgtk_2.3.2-1ubuntu6~saucy1.1.dsc

Now, it is just a matter of waiting and checking the results.

Uploading to your PPA

The final step is uploading the package with the new changes to your PPA.

I actually have one sandbox PPA for each stable PPA. These PPAs are not intended for general users but for being able to play with changes until I feel they are stable enough to be published in the stable PPAs. Hence, I have 4 PPAs:

  • ppa: Where I keep changes from official Ubuntu packages that are useful to me.
  • ppa-next: Not intended for general users. Where I keep unstable packages with the changes that I will move to the ppa one once I feel they are stable enough.
  • gnome3: Where I keep changes on packages whose source has been obtained from the GNOME3 Team PPA.
  • gnome3-next: Not intended for general users. Where I keep unstable packages with the changes that I will move to the gnome3 one once I feel they are stable enough.

With this, during the first cycles of development I upload the changes to my unstable PPAs before uploading them to the stable ones. For this example, I would upload first to the gnome3-next one:

$ dput ppa:tanty/gnome3-next ../webkitgtk_2.3.2-1ubuntu6~saucy1.1_source.changes

Once I'm happy enough, I upload the changes to the stable PPA:

$ dput -f ppa:tanty/gnome3 ../webkitgtk_2.3.2-1ubuntu6~saucy1.1_source.changes

The -f flag is to avoid the error triggered when there is already a "log" file from a previous dput upload of a given ".changes" package.

With this, you only have to wait for the package to be built by the PPA bots, update your repositories and upgrade:

$ sudo apt-get update && sudo apt-get upgrade

Enjoy your newly patched package!

What’s up with the scrollbar?

First, it was Ubuntu that innovated in the scrollbars, creating a nice overlay but making them unusable for those like me using a track pointer or a mouse without a wheel.

Now, with GTK-3.0, the scrollbars have also changed their default behavior: when clicking above or below the slider, the scrollbar moves immediately to that position.

Again, this makes them unusable unless you have a wheel on your mouse or some other fancy way of scrolling, like a touch pad.

I'm nowadays the proud owner of a Lenovo X220 and I use the included track pointer, disabling the annoying touch pad thanks to the Touchpad Indicator GNOME extension. I say "annoying" because, when using the track pointer, I tend to touch the touch pad every now and then, with unpredictable results.

So, with the new behavior and without the possibility of scrolling with a mouse wheel or a touch pad, long viewports are really difficult to browse with the pointer. This is the case for several of my mail folders in Evolution. As a result, I was going nuts.

Therefore, I wanted to go back to the old behavior; that is, clicking above the slider means "PgUp" and clicking below it means "PgDown".

Fortunately, GTK-3.0 provides a way of tuning this. You have to add an option to its "settings.ini" file. If you want to apply it system-wide, you will do it in "/etc/gtk-3.0/settings.ini", while if you only want to affect one user, you will do it in "~/.config/gtk-3.0/settings.ini".

This is how it looks:
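
Assuming the setting involved is gtk-primary-button-warps-slider (GTK's key controlling whether a primary click warps the slider to the click position), the file would contain something like:

[Settings]
gtk-primary-button-warps-slider = false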

Hope this helps someone else! 🙂

Introducing Facerecognition Resetter Plugin for the Nokia N9

As my colleague Simón wrote a short time ago in his post Announcing the Gallery Tilt Shift plugin for the Nokia N9, at Igalia we have published through the Nokia Store some plugins for enhancing the experience of the built-in Gallery application on the N9/N950: Enlarge & Shrink Plugin, Gallery Tilt Shift Plugin, and Facerecognition Resetter Plugin.

The Enlarge & Shrink Plugin is a filter developed by Antía Puentes for the built-in Gallery application which applies a radial distortion to a picture featuring an enlarge or shrink effect (also known as punch or pinch).

The Gallery Tilt Shift Plugin is a filter developed by Simón Pena for the built-in Gallery application which makes a picture look like a miniature.

Finally, the Facerecognition Resetter Plugin was developed by me. It is an add-on for the built-in Gallery application which is not a real picture filter. Instead, it is just a way of forcing the deletion or un/protection of the face recognition database from within Gallery. The main reason for doing this is a well known bug in the face recognition feature.

If you are experiencing that the N9 is not recognizing faces any more, or it is not giving any more suggestions, just install the Facerecognition Resetter Plugin and click on the "Protect" button. You want to do this even if you are not suffering from this problem, since it will prevent it from appearing in the future.

BTW, comments and reviews in the Nokia Store will be welcomed 😀

But, specifically, why would we want to reset or un/protect the face recognition database? Or, actually, what the heck is that face recognition database? Let's start from the beginning.

When Nokia released the PR1.2 update for the Harmattan platform, they included a new feature which made the N9 the first smartphone with integrated automated face recognition.

This feature, when activated, lets the Gallery or Camera application automatically recognize faces in the pictures stored on the device, showing a white bubble with a question mark on top of the region detected as a face.

Clicking on such a bubble, you are able to select one of your contacts to be assigned to the detected face.

The algorithm even learns as the user selects and assigns faces to contacts, so at some point it also starts suggesting the proper contact for the detected faces. The user, then, only has to double tap on the suggestion bubble to confirm that contact.

Everything seemed great but, after a while, some users started to complain that this feature eventually stopped working. Either it was not suggesting anyone, even when people were tagged in a large number of pictures, or it was just not recognizing faces any more.

As with any software, the face recognition feature contains bugs, and this problem is the consequence of one that Nokia has not fixed to date.

The technical explanation is that the algorithm that performs the face detection relies on SQLite to store its learning parameters and contacts. This database is located at:

/home/user/.local/share/gallerycore/data/faces.db

This file and its directory are protected through the usage of the AEGIS "powered" gallerycoredata-user user and gallerycoredata-users group. Also, the permission masks are 070 in the case of the directory and 060 in the case of the file.

When doing transactions on the database file, the SQLite driver may create some temporary files, such as the journal, to be able to recover the database after a disaster. This journal file gets the UID and GID of the running process and its permissions from a combination of the permissions of the original database file and the running process's umask. As a consequence, the journal file usually ends up with the permission mask 040.

While using the Camera or Gallery application, the SQLite file is open. If a disaster happens, although we still hold the journal file, the owner of that file is not able to read it. Hence, the SQLite database remains useless for processes with the same UID as the owner of the journal file, even when they belong to the same group as that file.

What happens afterwards is that the SQLite database remains waiting to be "recovered" using the journal file but, as the journal cannot be read, the face recognition algorithm cannot provide the learned information or suggest contacts any more. The solution would have been as easy as changing the file permissions of the journal file, but this is not even possible for the root user, since only the gallerycoredata-user user and those belonging to the gallerycoredata-users group are allowed through AEGIS to read and change the files in the parent directory of the database file.

Hence, the only way to apply a hack that would solve this problem was for the application making such changes to be either Gallery or Camera. Fortunately, Gallery can be extended through plugins, and that's the reason why Facerecognition Resetter is one.

Below, you can watch a video featuring a usage introduction tutorial, and a detailed explanation of its usage follows it.

The plugin shows 3 buttons for its corresponding actions:

  • Reset the database: As simple as that. It will delete the directory and files containing all the information gathered through the face recognition algorithm. From then on, the face recognition feature will start to work again, but the learning gathered previously, which powers the suggestions, will be lost.
  • Protect the database: This will correct the permissions of the directories and files containing all the information gathered through the face recognition algorithm. From then on, the face recognition feature will start to work again and the suggestions will keep the previously gathered learning. The problem will never show up again, but the database will remain protected and only usable through Gallery and Camera (or any other application with the proper AEGIS tokens).
  • Unprotect the database: This will correct the permissions of the directories and files containing all the information gathered through the face recognition algorithm. From then on, the face recognition feature will start to work again and the suggestions will keep the previously gathered learning. The problem will never show up again and the database will be available to any other application that would like to make use of it.

The permissions get corrected when un/protecting since the plugin sets the SGID bit on the parent directory of the database file, so any other files created under it will belong to the same group as the directory and not to the GID of the running process that created them. Also, the database will now have the 660 mask, so any temporary file created by the SQLite driver will attempt to keep the same mask.
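
Outside of the plugin, which has to do this from within Gallery because of the AEGIS restrictions explained above, the "protect" action conceptually boils down to a couple of commands like these:

# Conceptual equivalent of the "protect" action; only a process with the
# proper AEGIS tokens (such as Gallery itself) could actually run this.
chmod g+s /home/user/.local/share/gallerycore/data
chmod 660 /home/user/.local/share/gallerycore/data/faces.db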

And with this, we can keep enjoying the face recognition feature of the N9 and go celebrate it with some beers!!! 😀

This and the other plugins are Open Source, so you can go to their page at GitHub: Enlarge & Shrink, Gallery Tilt Shift and Facerecognition Resetter

Also, don’t forget to take a look at all the applications published by Igalia at the Nokia Store

Download Facerecognition Resetter Plugin from Nokia Store

Igalia wallpapers

“Igalia wallpaper for the N9/N950”

Some weeks ago we decided to update the information we show on Igalia's website. Due to these changes, I had the chance to play a little bit with some of the new graphic material used in the upgrade.

As a result, I've created, based on Opsou's Pedro Figueras' original idea, several different wallpapers for most of my GNU/Linux powered devices.

Just click on the images to download them at their original resolution.

I've uploaded them to a public Git repository which you can clone with the following command:

~#  git clone http://git.igalia.com/art/wallpapers.git
“4×3 Igalia wallpaper”

“16×9 Igalia wallpaper”

“Igalia wallpaper for the N900/N810/N800/N770”

Attending the Automotive Linux Summit 2012

Next Wednesday I will be attending the Automotive Linux Summit 2012.

It will be a good time to meet the key people pushing the usage of Linux in the automotive arena and I hope to have a great time at the Heritage Motor Centre in Warwickshire.

If you happen to attend the event and want to have a good chat with an Igalian about any of the technologies in which we are strongly involved (WebKit, rendering, compilers, Grilo, GStreamer, the kernel, Qemu, Yocto, OSTree, Skeltrack, OpenCV, a11y, Qt, Gtk+ and so on), just poke me whenever you see me around 😉