Welcome to EnDavid.com. Here you can find a compendium of things that I have published and other stuff I have made in my spare time.

If you get lost, try visiting the Site Map.

On this main page you can find my blog, where I keep track of updates to this site and post some technical articles from time to time. If you are interested, just subscribe to the RSS feed.


2018 Retrospective
Sun, 30 Dec 2018 21:18:27 +0000

Achievements

This year I spent some time investigating color and writing about it in a series of blog posts:

I also got contacted by a guy who was writing a blog post on Medium and wanted some help with display-P3. I reviewed his post and gave him some advice. Here's his final post: Hello Triangle, Meet Swift! (And Wide Color)

My investigations culminated in some code for my research Metal-based rendering engine, VidEngine, and a small app for iOS to generate sRGB and display-P3 color palettes: Palettist. (Hashtag: #Palettist)

Apart from that app, though, I haven't released many things this year. Mostly updates,

  • Complex Feelings: small functionality update (#ComplexFeelings #SomosSentimientos #モヤモヤ診断);
  • Coloroscope: support for all devices & orientations, including iPhoneX, landscape mode, and split screen on iPad #Coloroscope;
  • Snake on a sphere: support for iPhoneX;
  • Swift Pixels: new shapes, a visual "undo" history, and better layout for all screen sizes, including split screen. I'm quite happy with the responsive interface I created; check the screenshots in this tweet. (#SwiftPixels)
  • Silabitas: I released 20 new stages for its 2nd anniversary, with new crossword-like hints. (#Silabitas)
  • Sil·labetes: this is a new game, which is basically Silabitas but with Catalan as the target language (the game itself is localized to Spanish, English, Japanese, and Catalan). It's free and it has only 20 stages at the moment (and I realized that the interpunct or "punt volat" can't be part of a hashtag: #Sil·labetes fail)

I'm also happy with this blogpost where I summarized the efforts of automating the screenshots and video previews for Silabitas.

After summer I haven't done much development. I was slightly demotivated after the release of Sil·labetes. Although I never put much effort into marketing, I did contact several Catalan language groups. Since the game is free, I expected people would at least download it, but from a total of 198,931 impressions to date (more than all my other apps combined), I only got 92 downloads. I'd be happy with 92 downloads, but if I look at the hi-scores, it seems only 9 people have played past the 1st stage, mostly friends or family.

So after summer I've tried focusing my creative efforts on blogging and vlogging, mostly in Spanish though. This post about the failure of indie game development, El fracaso de un indie, was the most visited post of the year (not much, 139 views, but more than any of my games...)

My videos on YouTube are mostly humorous in nature, and I don't get many views (30 on average), but, again, that's more than the downloads I get for any of my games in a year... My most visited video (91 views so far) is one where I made fun of Article 13. It's really silly and embarrassing, but I like Apple's Memoji expressiveness, and it serves as therapy for me to say all the silly things I want in Spanish from time to time (I mostly use Japanese at home).

If I have an idea for a small game, it may take some months to complete, for less than 30 downloads. In comparison, having an idea for a video, spending a few hours to complete it, and getting 30 views is more rewarding. That doesn't mean I've stopped game & app development. I plan to resume when I find the self-motivation again.

Power up

Between work and my indie development, I practiced mostly 4 programming languages,

  • Swift. I try to keep all my personal projects up to date with the latest version of Swift, and I refactor from time to time.
  • C#. I haven't done much C# this year, but I did some big-win refactoring of one of our services in the office, bringing down the boot up time from 3 minutes to 2 seconds, by using lazy initializers and better math.
  • C++. I started working on a C++ project in the office using C++11 and following the book Effective Modern C++. I think I got a better understanding of type inference and move semantics.
  • JavaScript. I revived an old project, the WebGL model viewer (source on GitHub). I've always used JavaScript to prototype things quickly, but I had never worked on a big project with it. I've started unit-testing this one, and also modernizing it. I used to follow the book Effective Javascript (see my notes), but I've started dropping compatibility with older standards in order to increase readability and be more modern. So I'm making heavy use of modules, Promises, and newer keywords like class for the first time.

I've also practiced reading Japanese, which I don't do that often these days. I've read more than 20 volumes of Battle Angel Alita (James Cameron has made a movie based on it, coming Feb. 2019), 20 volumes of Uramiya Honpo, and several other manga (all through the Japanese Amazon Kindle store). Reading is not that difficult, but I also did a fan translation of the first story of Uramiya Honpo, which was a good exercise for practicing.

I also tried drawing some pixel art from time to time for Pixel Dailies: #pixel_dailies from:@endavid (I've drawn more, but they don't appear in the search for some reason). I use my app Swift Pixels for all the drawings.

Apart from that, I guess I've practiced some rudimentary video editing (with iMovie, though... 😅)

Power down

I gave up on some other things... I need some focus... 😅

  • Piano. I decided I was going to stop practicing until I feel like playing again. Up to this point, I practiced because I felt stressed that, after so much effort, I would forget everything if I stopped. But I should play for fun, not because I feel stressed... So I'll forget it and learn it again when I feel like it.
  • Reading. I've done technical reading, and I've read a couple of "coaching" books in Catalan and Spanish. I've also read lots of Japanese comic books, but no novels.

Fun

Trips

  • I visited Barcelona several times to visit my family. They also prepared an awesome party and a surprise party for my 40th birthday 😃. We also visited Majorca during summer for holidays.
  • Short holiday trips to Majorca and Bologna.
  • I was in Tokyo for 3 weeks, working remotely, but mostly for leisure. We also visited Hakodate during that time.

My favorite movies this year: The Shape of Water and Bohemian Rhapsody.

My favorite band this year: Wednesday Campanella, for both the music and the visuals of their videos.

And I didn't have time to play "proper" games (as in AAA console games), so I have a long list of "games I want to play", but during Xmas I managed to clear Beyond: Two Souls, which was very good, and I've just started playing the new God of War on PS4. I've also cleared a couple of nice indie games: Inside, on PS4, and Undertale, on PS Vita. On iOS, I've cleared a game called Slime Pizza, a quite smart platformer, and I've also played a couple of cute platform games, Super Cat Tales and Cat Bird. But my biggest recent discovery is a game called Flipping Legend.

And that's been my 2018! Lots of Youtube procrastination at home, as always, but I try to keep some creativity alive.

Happy new year everyone! Thanks for reading! ❤️


A quest for automated iOS marketing shots
Sun, 29 Jul 2018 10:16:23 +0100

Motivation for automation

Legend has it that if a person happens to find your app or game in the App Store, they'll be put off if the video previews and screenshots are not in their language. And even if the app is indeed localized, it won't count as localized if you don't prove this with screenshots.

The other troublesome legend is that successful apps have responsive interfaces that adapt well to different screen sizes and aspect ratios.

So for Silabitas I spent time adjusting the font size and the location of the different UI elements for all the different iPad and iPhone screens, and I localized it to all the languages I know: English, Japanese, Spanish, and Catalan. The first version of Silabitas was also translated to German, Chinese, French, Portuguese, and Italian. But I dropped support for those because it was too costly, and there were 0 users anyway.

But even with just 3 languages (Catalan is not supported in the App Store, so I don't need to take screenshots for that one), if I want to take 6 screenshots for my game, that means 6 screenshots × 3 languages × 7 screens (5.8'', 5.5'', 4.7'', 4'', 12.9'', 10.5'', 9.7''). That's 126 screenshots. I used to take those manually, but that's a lot. And if you want to make at least one video preview per language, that will be 21 videos.

For Silabitas' latest update I thought it was time I automated this because it's a pain. I read about fastlane and I thought it would be all easy-peasy, but it wasn't that easy... I'll describe here the different steps and/or approaches for automating the process as much as possible.

Screenshots with Fastlane

Just follow the documentation to set it up. It's all quite easy, but I encountered a couple of issues. The first one is that multiple languages don't seem to be supported in the Swift configuration. I found out that you can simply create a "Snapfile" inside the fastlane folder for it to work. So ignore the "Snapfile.swift" that gets created in that folder, and put your list of devices and languages in the "Snapfile",

devices([
   "iPhone 5s",
   "iPhone 8",
   "iPhone 8 Plus",
   "iPhone X",
   "iPad Pro (12.9-inch)",
   "iPad Pro (10.5-inch)",
   "iPad Pro (9.7-inch)"
])

languages([
        "en-US",
        "es-ES",
        "ja"
])

Then, you create some UI tests that call the snapshot function at the right times; a minimal sketch of such a test is shown below. To generate all the snapshots, you call fastlane snapshot from the command line. It will start the appropriate simulators in the different languages, so the process can take a couple of hours. Once it's done, it will create a summary page in HTML and a report in the console.
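
Assuming the SnapshotHelper.swift that fastlane generates has been added to the UI test target, such a test can be as small as this (the screen and button names here are just placeholders, not the actual Silabitas ones),

import XCTest

class ScreenshotTests: XCTestCase {
    override func setUp() {
        super.setUp()
        continueAfterFailure = false
        let app = XCUIApplication()
        // setupSnapshot comes from fastlane's SnapshotHelper.swift
        setupSnapshot(app)
        app.launch()
    }
    func testMainMenu() {
        let app = XCUIApplication()
        app.buttons["startButton"].tap()
        // the name becomes part of the screenshot file name
        snapshot("0MainMenu")
    }
}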

Now you can try to set up deliver to automatically upload the screenshots to App Store Connect. Again, it seems the setup for Swift failed to create the appropriate files, so I had to manually create a DeliverFile inside the fastlane folder. I couldn't find much information about the format, but I set up these 3 things,

username("my developer account")
app_identifier("the Bundle Identifier of the app")
overwrite_screenshots(true)

When I call fastlane deliver it generates another HTML page with a preview of what's going to get uploaded, but the screenshots point to the wrong place. I have to manually move the Preview file to the right folder in order to see the summary properly. But if I accept the changes and continue the upload, I always get several errors in the console,

I searched in several forums, but I couldn't figure out how to resolve this. In the end I gave up, and I uploaded the screenshots through the App Store Connect website...

So I couldn't fully automate this, but at least I managed to automate the most tedious process of taking the screenshots. Uploading these manually through the Media Manager in App Store Connect means drag&dropping 7 sets of 6 images (multiple select the 6 screenshots) for all the languages, so 21 drag&drop actions. Not ideal, but I'll live with that.

Screenshots for SpriteKit

The challenge I had for Silabitas is that it uses SpriteKit and there's no immediate support for UI testing. But before addressing SpriteKit, make sure that if you have any UIButtons, they all have a unique accessibilityLabel set up. If they don't have a label, you can access a button from a UI test by its text (if it has text), or the name of the image it displays. But if you've localized your game, these texts and images will probably change with each locale, so the tests will fail. Use accessibility labels instead.

For SpriteKit you will have to manually create accessibility elements at the same locations as your sprites, and assign them unique labels that you can identify. I followed this nice post from StackOverflow: Step by step guide to create accessibility elements for SpriteKit. By the way, once you have this set up, I assume you can use VoiceOver to control your app, so it's a double-win, but I haven't actually tried.
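
For reference, a rough sketch of what those accessibility elements could look like (this is not the actual Silabitas code; it assumes the scene fills the SKView and that the tappable nodes are direct children of the scene, each with a unique name),

import SpriteKit
import UIKit

extension SKScene {
    // Call this whenever the set of tappable sprites changes.
    func exposeToAccessibility(_ nodes: [SKNode], in view: SKView) {
        view.accessibilityElements = nodes.map { node in
            let element = UIAccessibilityElement(accessibilityContainer: view)
            element.accessibilityLabel = node.name   // e.g. "piece.8" or "cell.4.1"
            element.accessibilityTraits = .button
            // Scene coordinates have the origin at the bottom-left, UIKit at the top-left,
            // so convert the top-left corner of the node's frame to view coordinates.
            let topLeft = view.convert(CGPoint(x: node.frame.minX, y: node.frame.maxY), from: self)
            let frameInView = CGRect(origin: topLeft, size: node.frame.size)
            // Accessibility frames are expressed in screen coordinates.
            element.accessibilityFrame = UIAccessibility.convertToScreenCoordinates(frameInView, in: view)
            return element
        }
    }
}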

When you create a UI test in Xcode, there's a record button that appears when you place the cursor inside a test function. If you click record, it will record all the interactions that you do, mainly button clicks. However, it fails to capture the accessibility elements for SpriteKit that we've just created. But if you gave them proper names, you can create the test by hand. In the case of Silabitas, you first select a "piece" with a number which is its position in the list of pieces for that screen, and then click on the board, which I indicate with the name "cell" plus 2 integers for X and Y. So one of my tests looks like this,

    func testSortija() {
        let app = XCUIApplication()
        app.buttons["startButton"].tap()
        app.buttons["Level2"].tap()
        app.buttons["stage.0"].tap()
        app.buttons["piece.8"].tap()
        app.buttons["cell.4.1"].tap()
        app.buttons["piece.3"].tap()
        app.buttons["cell.5.2"].tap()
        snapshot("4StageSortija")
    }

Attempt to automate video captures with simctl

Now the most challenging part. I used to create all videos manually, on a real device, or playing in the Simulator and recording them with simctl as I played. Then, I would use iMovie to create an App Preview. I manually edited the timing and the transitions there (an App Preview can't be longer than 30 seconds). But that meant I usually created the videos in just one language, English, and used the same one for all the other languages. I'm happy to give up on nice editing in favor of an automatic process for all languages.

So I created a UI test that would be my demo session. In Xcode, I created a separate Scheme, and for the testing of this scheme, I selected only this demo test. This is so that when I run the tests from the command line, only this test is called. Check this post. So I created this simple script to start the test in a Simulator, and then wait for the simulator to boot to start a recording with simctl,

xcodebuild -project Silabitas.xcodeproj \
           -scheme "SilabitasTest" \
           -destination 'platform=iOS Simulator,name=iPhone 6,OS=11.4' \
           test&

# https://coderwall.com/p/fprm_g/chose-ios-simulator-via-command-line--2
# wait until there is a device booted
count=`xcrun simctl list | grep Booted | wc -l | sed -e 's/ //g'`
while [ $count -lt 1 ]
do
    sleep 1
    count=`xcrun simctl list | grep Booted | wc -l | sed -e 's/ //g'`
done
echo "Recording video... Press CTRL+C to stop..."
xcrun simctl io booted recordVideo silabitas-en-iPhone6.mov

Unfortunately, this produces blank videos. I'm not sure why. At first I thought it was because I was running it in background mode, but I also read there's a bug in Xcode 9.4.1: https://stackoverflow.com/a/51120591/1765629. So I installed Xcode 10 beta 4, and then I was able to capture, but the capture result was glitchy. I think it's out of sync or something,

I tried starting the UI test from Xcode, and then manually calling simctl from the command line as soon as my app starts, but I also get glitchy videos. I could try using a real device, but then I wasn't sure how I would capture. Even if I could script QuickTime to do the screen capture, as soon as QuickTime gets hold of the device, it disconnects it from Xcode, so the UI test would get interrupted 😅... So I decided I'd better give up on this route.

Automating video captures with ReplayKit

So the only alternative I think I had left was to take the captures from inside the app. This has the advantage that the capture can be scripted from the UI test. I've checked several sources, but this post was the clearest and most straightforward.
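
The in-app capture itself goes through RPScreenRecorder. A rough sketch of a recording toggle (simplified, with error handling and delegate dismissal left out; presenting the preview controller is what later gives the UI test a "Save" button to tap),

import ReplayKit
import UIKit

class RecordingHelper {
    private let recorder = RPScreenRecorder.shared()

    func toggleRecording(from viewController: UIViewController) {
        // isAvailable is false on old devices (and recording misbehaves in the Simulator).
        guard recorder.isAvailable else { return }
        if recorder.isRecording {
            recorder.stopRecording { previewController, error in
                DispatchQueue.main.async {
                    // The preview controller has the toolbar with the "Save" button.
                    if let preview = previewController {
                        viewController.present(preview, animated: true)
                    }
                }
            }
        } else {
            // The first call triggers the system permission alert.
            recorder.startRecording { error in
                if let error = error {
                    print("Could not start recording: \(error)")
                }
            }
        }
    }
}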

In my UI tests I pass a special argument to my app like this,

app.launchArguments.append("--uitesting")

I use this in my AppDelegate to set up a demo account (so I don't log into GameCenter) that has several stages unlocked. But I also use it to add a special button to the main screen with an accessibility label called "Record" that I can easily call from my UI test (a sketch of this app-side check follows the UI test code below). My UI test is wrapped with this,

   app.buttons["Record"].tap()
   sleep(3)
   if app.alerts.count == 1 {
       app.alerts["Allow screen recording in “Silabitas”?"].buttons["Record Screen"].tap()
   }
   // play...
   // [...]
   // stop recording and save the video
   app.buttons["Record"].tap()
   sleep(5)
   app.toolbars["Toolbar"].buttons["Save"].tap()
   sleep(4)

If you've given permissions once, you may not get the alert. Hence the "alerts.count" guard. The other problem here seems to be localization, since I'm not sure the system alert asking for recording permission has a generic accessibility label. However, I found out that for automating the localization of the video recordings it is better to have your device set to English, and then change the localization settings when launching the app programmatically, so you can have a UI test per language and record all the languages in one session. The system will be in English, so the system alert will still show in English, but your game or app will appear localized. Convenient ☺️. See the UI test code below,

private func launch(app: XCUIApplication, lang: SupportedLang) {
    var language = ""
    var locale = ""
    switch(lang) {
    case .en:
        language = "(en)"
        locale = "en_US"
    case .ja:
        language = "(ja)"
        locale = "ja_JP"
    case .es:
        language = "(es)"
        locale = "es_ES"
    }
    app.launchArguments += ["--uitesting"]
    app.launchArguments += ["-AppleLanguages", language]
    app.launchArguments += ["-AppleLocale", locale]
    app.launch()
}
func testGameSessionEnglish() {
    let app = XCUIApplication()
    launch(app: app, lang: .en)
    recordGameSession()
}
func testGameSessionSpanish() {
    let app = XCUIApplication()
    launch(app: app, lang: .es)
    recordGameSession()
}
func testGameSessionJapanese() {
    let app = XCUIApplication()
    launch(app: app, lang: .ja)
    recordGameSession()
}
private func recordGameSession() {
    let app = XCUIApplication()
    app.buttons["Record"].tap()
    // ... see preview code
}
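
On the app side, the "--uitesting" flag from the UI tests can be checked early at launch. A minimal sketch (setUpDemoAccount and addRecordButton are hypothetical placeholders for my actual setup),

// In the AppDelegate:
func application(_ application: UIApplication,
                 didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    if CommandLine.arguments.contains("--uitesting") {
        // Skip the GameCenter login, unlock the demo stages,
        // and add the hidden "Record" button to the main screen.
        setUpDemoAccount()
        addRecordButton()
    }
    return true
}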

So, all this sounds nice but.... There are some gotchas. First, ReplayKit doesn't work on older devices, like the iPhone 5 (the iPhone 5S should be fine). The isAvailable field will return false, so you'll know. I have an iPhone 5 I use for development, so this is a bit of a bummer. Especially since the second problem is that ReplayKit doesn't seem to work in the Simulator either. I couldn't find any documentation on this, but that's what I read in some comments here and there. It's strange, because "isAvailable" returns true, but I don't get the "recording permissions" alert, and when I press stop, the callback of the stop function is never called. This sounds like this issue, but that issue happens on real devices as well. I got that on my iPad, but I think that was because the UI test, for some reason, clicked the "Do not allow" button, and then the screen recording seems to stop working in the following sessions. There's no alert being displayed, and the startRecording function does not return any errors, so it's apparently working, but the stop callback never happens. I solved this by rebooting the iPad.

So... the remaining problem is borrowing real devices. I have an iPad Pro 9.7'', the iPhone 5 that I can't use for this, and an iPhoneX. It's a bit of a pain finding real devices, but I'll come up with something 😅. For the iPads, even if the UI changes slightly (font sizes, etc.) from device to device, I've simply recorded the 3 videos for the 3 languages on my iPad and rescaled them. It's the same aspect ratio, so there aren't any strange deformations. Read the next section.

Final video touches

No way I'll edit the videos manually again with iMovie. It may be prettier, but it's too much work. I think what's important is that the user gets a feel for the real app, including the visuals for their language. Because my UI test generates a video that is 38 seconds long, I've sped it up a little with ffmpeg. Also, ReplayKit creates a 60fps 1920x1440 video, and both the frame rate and the resolution are wrong for App Store Connect. So here are the conversions that I do from the command line,

# The UITest produces a 38~39 sec video. Speed it up to be ~29 secs
ffmpeg -i Silabitas2.0-ja.MP4 -filter_complex "[0:v]setpts=0.75*PTS[v];[0:a]atempo=1.33[a]" -map "[v]" -map "[a]" Silabitas2.0-ja-faster.mp4

# App Store complains about too high frame rates! Change to 30fps
# https://stackoverflow.com/a/26730600/1765629
ffmpeg -i Silabitas2.0-ja-faster.mp4 -r 30 Silabitas2.0-ja-30fps.mp4

# The capture produces a 1920x1440 video, but App Store Connect
# wants them to be 1600x1200 for 12.9'' and 10.5'' iPads
ffmpeg -i Silabitas2.0-ja-30fps.mp4 -vf scale=1600:1200 -c:a copy Silabitas2.0-ja-1600x1200.mp4
# 9.7'' iPad
ffmpeg -i Silabitas2.0-ja-30fps.mp4 -vf scale=1200:900 -c:a copy Silabitas2.0-ja-1200x900.mp4

You could change the script to loop over all the languages. And you could probably do the whole final transformation with just one call to ffmpeg, but ffmpeg is too confusing for me... When I combine parameters I always end up with a video out of sync, or other weird problems...

Once I find all the necessary iPhones, I'll end up with 21 videos that I'll have to drag&drop to upload manually again. I've already uploaded the 9 iPad videos and they look fine 👍. Here's one of those videos that I've uploaded to YouTube:

Summary

This was a long post, but that's because I've been struggling with this the whole week! So it did indeed feel like a "quest"! I "wasted" many hours, and I couldn't get it all fully automated. There are also a few gotchas, but at least I can:
  • automatically capture all the screenshots for all devices and languages through UI tests and fastlane;
  • automatically capture a demo video for all languages in a given real device.
For the video capture, I still have to start the test manually in all the relevant devices, but that's max 7 times. Then I have to copy all the videos back from the device to my laptop, and process the files with ffmpeg.

I had already encountered the problem of not being able to use the Simulator in the past, with Palettist. That's because that app is Metal-only, and Metal is not supported in the simulators 😭. So I guess for Metal apps I'll have to plug in the devices one by one, and maybe set up fastlane to use one device at a time for the screenshots.

And for both the videos and the screenshots, I still have to use the App Store Connect web interface to drag&drop 21 sets of images, and 21 videos. But overall, this is much faster than what I did in the previous release of Silabitas. Oh, and please check the game!

silabitas.endavid.com

The latest update with 20 new stages should be out soon... As soon as I find iPhones to capture all the missing videos...


Display P3 vs sRGB in Color Palettes
Sun, 08 Apr 2018 17:44:37 +0100
In my previous blog post I talked about the Display P3 color space, a wide gamut RGB color space used by Apple. I tried to visualize the difference between displayP3 and sRGB colors using Self-Organizing Maps, but they weren't of much use for visualizing the difference.

Since then, I've been working on an app I called Palettist to compute color palettes using Display P3. The app, still not out in the App Store, includes two interesting options to produce examples of displayP3-only images. First, you can select "displayP3 - sRGB" as the input color space, so only colors outside the sRGB gamut, but inside the displayP3 gamut, are used in the color palette. Then, you can select an "sRGB comparison shape" to render a shape inside each color bin where the color is clamped to its equivalent color in sRGB color space. This is great for evaluating different displays and their Display P3 capabilities. Check the online manual for details.

Here's the first example of displayP3 only colors:

You could be seeing several things now:

  • If you are reading this in a browser like Safari that understands 64-bit PNG images with an embedded Display P3 color profile, AND on a display that can show that gamut, then you should see circles inside each color bin/square. The colors of the squares are in displayP3, and the color of each circle is the color of its surrounding square, clamped to sRGB color space. You can see that the sRGB colors are duller. For some colors, the differences are very small. But there's definitely a circle inside each one.
  • If you don't have a displayP3 display, but your OS/browser is doing proper color management, then you shouldn't see any circle. You should see this,

    I created this image by converting the one above to sRGB using GIMP. The two images should look the same if you have an sRGB display.

  • If you don't have a displayP3 display, but you see circles, I suspect you may not have proper color management. What happens is that some programs ignore the color profile and interpret the RGB triplets in each pixel as if they were in sRGB color space. That means that the squares look like what the circles were supposed to look like, and the circles now look even duller. You would be seeing this,

    I created this image by embedding the wrong color profile, sRGB, to the first one. Everyone should be able to see circles in this image, but all these colors are now inside the sRGB color gamut.

Display P3 color palettes by color category

For reference, I've created more of these displayP3-only palettes, clustered by color category, so it's easier to appreciate the gamut differences depending on the color.

Light brown

Brown

Pink

Orange

Purple

Azure

Blue

Yellow

Green

Red

You can make a few observations from here:

  • there are no black, gray, or white colors;
  • there are only a few browns, and you can barely tell the difference from their sRGB counterparts;
  • greens are the most numerous;
  • there are a lot more blues than I originally expected, but I think it's because of the green component in cyan-like blues. Although the blue channel is also more intense than in sRGB.

If you inspect the difference of displayP3 and sRGB in Color Sync Utility (or check my previous blog post), I think the observations above are consistent with the shape of the volume difference.

Display P3 in Metal SDK

To close, just some programming notes. It took me a while to figure out how to render displayP3 colors in Metal. I wrote an extensive post in Stack Overflow. I'll summarize here the main points.

MTKViews have a colorSpace property in macOS, but not on iOS. I suspect the difference is because color management on iOS is targeted (see Best practices for color management). So how does it work, then?

The solution is simple. You can set an output surface in Metal with a texture in an extended range pixel format. For instance, bgra10_xr_srgb. When you do this, if you output (1, 0, 0) in your shader, you will obtain the expected sRGB red color, but if you make its value greater than one, the color will be even redder (provided that your iPhone/iPad display supports it), falling outside the sRGB gamut but still inside the Display P3 gamut.

So if your rendering pipeline assumes Display P3 linear color space all the way through, before you render your colors to the output texture, multiply by the 3x3 matrix that linearly converts from displayP3 to sRGB (you can find the matrix in my previous blogpost). DO NOT CLAMP OR SATURATE the output! Some values will be negative, and some will be greater than 1. And that's OK. Just make sure the pixel format of your output texture is one of the extended range ones. If it ends in "_srgb", it will also apply the gamma for you.
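
In Swift terms, the setup can be as small as this (a sketch, not the VidEngine code; the matrix values are the ones from the previous post, and in practice the multiplication would live in the fragment shader rather than on the CPU),

import MetalKit
import simd

// Extended-range pixel format: shader outputs outside [0, 1] survive,
// and the "_srgb" variant applies the gamma for you.
func configureForDisplayP3(_ view: MTKView) {
    view.colorPixelFormat = .bgra10_xr_srgb
}

// Linear Display P3 -> linear sRGB. Do NOT clamp the result:
// components below 0 or above 1 are exactly what reaches the wide gamut.
let p3ToSRGBLinear = float3x3(rows: [
    SIMD3<Float>( 1.2249, -0.2247, 0.0),
    SIMD3<Float>(-0.0420,  1.0419, 0.0),
    SIMD3<Float>(-0.0197, -0.0786, 1.0979)
])

func toExtendedSRGB(_ displayP3Linear: SIMD3<Float>) -> SIMD3<Float> {
    return p3ToSRGBLinear * displayP3Linear
}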

And that's all! Check the Stack overflow post for details.

Please let me know if this is useful to you. Also, please download Palettist and try generating some displayP3 palettes yourself 😊 Come back here or follow me on Twitter to find out when the app gets published. Hopefully, sometime around next week, if Apple doesn't find any issues. I'm a bit anxious because it's the first app I've released using a Metal-only library, my VidEngine. Wish me luck.


Exploring the display-P3 color space
Sun, 11 Mar 2018 11:49:01 +0000

About display-P3

I recently found out that the iPhoneX supports a color space called Display P3, which is based on the DCI-P3 primaries. I read about it in Apple's Human Interface Guidelines, and I immediately started thinking of ways of generating examples to illustrate the difference between sRGB and P3, on a display that supports it. My Mac screen doesn't, but both my iPad and the iPhoneX do. I found a few interesting examples here: Wide Color Gamut examples.

Visualizing P3-sRGB

What I thought I could do is to generate samples that are in the P3 gamut but outside the sRGB gamut, i.e. the difference between P3 and sRGB, and perhaps generate a palette of P3-only colors, just to see how it looks. You can visualize the volume difference of the 2 color spaces easily with the ColorSync Utility on Mac. I've captured a couple of animated GIFs, with both the XYZ and the L*a*b* axes. See below, where the bigger white volume is the P3 color space,

A simple approach to generate samples out of the sRGB gamut is by brute force: quantize the whole P3 color space in a few bins, and convert to sRGB without clamping. Colors with any channel with a value smaller than zero or greater than one will be out of gamut. If we start with a rough subdivision, e.g. 7 bits per color channel, or 128 bins per channel (128 * 128 * 128 ~= 2M voxels), we can then subdivide further each voxel (imagine an octree). Or we could compute the intersection planes and just look for points below, or above the plane (I tried that approach, but the code turned out to be too complicated; I decided to go for simplicity --but you can go back to that attempt if you explore the commits).
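
A sketch of that brute-force enumeration (this is not the actual SampleColorPalette code; the conversion matrix is the linear displayP3-to-sRGB one derived further down in this post),

import simd

// Linear Display P3 -> linear sRGB (see the derivation below).
let p3ToSRGBLinear = float3x3(rows: [
    SIMD3<Float>( 1.2249, -0.2247, 0.0),
    SIMD3<Float>(-0.0420,  1.0419, 0.0),
    SIMD3<Float>(-0.0197, -0.0786, 1.0979)
])

// 7 bits per channel -> 128 bins per channel -> ~2M samples.
let bins = 128
var outsideSRGB: [SIMD3<Float>] = []
for r in 0..<bins {
    for g in 0..<bins {
        for b in 0..<bins {
            let p3 = SIMD3<Float>(Float(r), Float(g), Float(b)) / Float(bins - 1)
            let srgb = p3ToSRGBLinear * p3   // unclamped!
            // Any channel below 0 or above 1 means the color is in P3 but not in sRGB.
            if srgb.x < 0 || srgb.y < 0 || srgb.z < 0 ||
               srgb.x > 1 || srgb.y > 1 || srgb.z > 1 {
                outsideSRGB.append(p3)
            }
        }
    }
}
print("samples outside sRGB: \(outsideSRGB.count)")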

Computing the difference P3-sRGB

I've created a series of unit tests to test color conversions, so you can jump straight to the point. But I'll explain a bit about it here.

The first thing to read about is Color Management in OS X and iOS. In short, you can use UIColor to easily create color instances in both sRGB and displayP3. But notice that those colors will have the gamma already applied to them. If you need linear values, or other color spaces like XYZ, you will need to use Core Graphics directly, the CGColor class.
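
For example, with UIKit's own initializers (just an illustration, not code from my conversions file),

import UIKit

// Both colors already have the transfer function (gamma) applied.
let p3Red = UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
let sRGBRed = UIColor(red: 1, green: 0, blue: 0, alpha: 1)  // extended sRGB on iOS 10+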

The problem with those conversions, apart from how cumbersome it is to do any simple operation with all those classes because of all the wrapping and unwrapping, is that the values are automatically clamped. I need unclamped values because I need to know if a value is outside the gamut. There might be a way to do that programmatically with those classes, but I couldn't find one. Anyway, I wanted to understand the color conversion in detail, so I implemented my own set of conversions, and I used the unit tests to compare with Apple's classes and the ColorSync Utility. All my conversions are in this file.

On BruceLindbloom.com you can find the formulas to convert from an RGB color space to XYZ, and vice versa. You will need a set of primaries, which you can also find on that page for sRGB. The sRGB color primaries adapted to a D50 white point are these,

public static let sRGB = RGBColorSpace(
        // primaries adapted to D50
        red: CiexyY(x: 0.648431, y: 0.330856, Y: 0.222491),
        green: CiexyY(x: 0.321152, y: 0.597871, Y: 0.716888),
        blue: CiexyY(x: 0.155886, y: 0.066044, Y: 0.060621),
        white: .D50)

Note that the white point is D50, not D65 as I wrongly assumed at the beginning... It took me a while to realize that I was wrong, until I started creating unit tests... Although I couldn't find any mention of D50 in the CGColor documentation, you can verify the white point of the sRGB color profile with the ColorSync Utility.

The problem was finding the primaries for displayP3. The DCI-P3 page on Wikipedia, and this article, say these are the values (assuming Y=1),

public static let dciP3 = RGBColorSpace(
        red: CiexyY(x: 0.680, y: 0.320),
        green: CiexyY(x: 0.265, y: 0.690),
        blue: CiexyY(x: 0.150, y: 0.060),
        white: .D65)

If I use those, the RGB to XYZ conversion matrix (column-major) results in,

    0.486569  0.265673   0.198187
    0.228973  0.691752   0.0792749
    0.0       0.0451143  1.04379
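
For reference, this is roughly how such a matrix is built from the xy chromaticities and the white point, following the Lindbloom formulas (simplified types, not my actual conversion code),

import simd

struct Chromaticity { let x: Float; let y: Float }

// xyY with Y = 1, converted to XYZ
func xyzFrom(_ c: Chromaticity) -> SIMD3<Float> {
    return SIMD3<Float>(c.x / c.y, 1, (1 - c.x - c.y) / c.y)
}

func rgbToXYZ(red: Chromaticity, green: Chromaticity, blue: Chromaticity,
              white: Chromaticity) -> float3x3 {
    // Columns are the (unscaled) XYZ of each primary.
    let m = float3x3(columns: (xyzFrom(red), xyzFrom(green), xyzFrom(blue)))
    // Scale each column so that RGB = (1, 1, 1) maps to the white point.
    let s = m.inverse * xyzFrom(white)
    return float3x3(columns: (s.x * m.columns.0, s.y * m.columns.1, s.z * m.columns.2))
}

// P3 primaries with a D65 white point (x = 0.3127, y = 0.3290)
let p3ToXYZ = rgbToXYZ(red: Chromaticity(x: 0.680, y: 0.320),
                       green: Chromaticity(x: 0.265, y: 0.690),
                       blue: Chromaticity(x: 0.150, y: 0.060),
                       white: Chromaticity(x: 0.3127, y: 0.3290))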

But checking the primaries in the "Display P3.icc" profile results in,

    0.5151  0.292   0.1571
    0.2412  0.6922  0.0666
   -0.0011  0.0419  0.7841

So I went for those, representing the primaries directly in XYZ color space, instead of xyY.

Finally, if you multiply the matrices to convert from P3 to XYZ, and then from XYZ to sRGB, you obtain this matrix for direct conversion between linear displayP3 and linear sRGB,

    1.2249  -0.2247  0
   -0.0420   1.0419  0
   -0.0197  -0.0786  1.0979

In my code, that matrix can simply be obtained with,

let m = RGBColorSpace.sRGB.toRGB * RGBColorSpace.dciP3.toXYZ

The opposite, from sRGB to P3, is given by this matrix,

    0.8225  0.1774  0
    0.0332  0.9669  0
    0.0171  0.0724  0.9108

displayP3 size compared to sRGB

I created a sample app, SampleColorPalette, to compute the P3 minus sRGB difference. As explained in the introduction, I use only 7 bits per channel, since that already gives me lots of samples. The count of samples out of the sRGB gamut is 625154, out of the 2 million values of the 7-bit color space. So approximately 29%. Wikipedia says that P3 has a 25% larger color gamut than sRGB, but by these accounts it looks as if it's 42% larger (2M/(2M-600K)).

According to BruceLindbloom.com, the Lab gamut efficiency of sRGB is only 35%, while Wide Gamut RGB is 77.6%. I guess displayP3 should be somewhere in between. But I can't seem to reconcile the difference between my figures and Wikipedia's.

displayP3 in Metal textures

I used 16-bit RGBA textures to do all the processing. I applied the gamma manually in the shader, extracted the bytes in the right order, and stored the image with the appropriate color space in a UIImage, which I later saved as a PNG. The interesting bits are in TextureInit.swift and Image.swift.
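
For reference, a rough sketch of wrapping raw 16-bit RGBA pixels in a CGImage tagged with the Display P3 profile (this is not the actual Image.swift code; the alpha and byte-order flags depend on how the bytes were extracted from the texture),

import CoreGraphics
import UIKit

func makeP3Image(data: Data, width: Int, height: Int) -> UIImage? {
    let bitsPerComponent = 16
    let bytesPerRow = width * 4 * 2   // RGBA, 2 bytes per component
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue |
                                  CGBitmapInfo.byteOrder16Little.rawValue)
    guard let provider = CGDataProvider(data: data as CFData),
          let colorSpace = CGColorSpace(name: CGColorSpace.displayP3),
          let cgImage = CGImage(width: width, height: height,
                                bitsPerComponent: bitsPerComponent,
                                bitsPerPixel: bitsPerComponent * 4,
                                bytesPerRow: bytesPerRow,
                                space: colorSpace,
                                bitmapInfo: bitmapInfo,
                                provider: provider,
                                decode: nil,
                                shouldInterpolate: false,
                                intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: cgImage)   // can later be saved as a PNG
}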

I was slightly confused by the Digital Color Meter in Mac, because I saved the image below and if I select "display P3", the values are not (255, 0, 0), but (237, 49, 19). I think this is because the image is in displayP3 (the embedded profile in the PNG file says so), but my monitor can't display that. So it must convert it to the profile of my screen, and the P3 value is the value of going back from the display profile to P3. If I select instead "Display native values", then I see (255, 0, 0), which must be the underlying value of the image before applying any color conversion. Perhaps someone out there reading this can clarify... 😅

The image above, by the way, has a label that says "sRGB" on the image to the right. You should be able to read it without problems if your display covers the displayP3 gamut.

Self-Organizing Maps

My first attempt to visualize the difference between P3 and sRGB has been to reimplement in Metal the Self-Organizing Maps for color palettes that I implemented some time ago in JavaScript. You can find all the code in the same example on GitHub. I posted a video on Instagram of how the SOM gets generated. Note that the gamma is wrong, and I'm not even sure how to display a displayP3 texture properly with Metal... So until it becomes a UIImage I can't tell for sure I'm seeing the right thing (any Metal experts out there? 😅).

Below you can see the final PNG images, exported from the UIImages that I obtained. The left one is in displayP3, and the right one is in sRGB, so supposedly many values should have been clamped and I expected it to have more banding on displayP3 displays than the one on the left (both images should look the same on an sRGB display),

Assuming you have a displayP3 display, can you tell the difference? I can't 😅

There are some subtle differences, but I think the main issue with using a SOM is that pixels between two extreme colors get interpolated, entering again the gamut that sRGB can represent. I think a better way to do this would be to first split the samples into color categories, so even if values get interpolated, hopefully they won't cross the xy triangle.

Another option I'm considering is to simply sort colors in some other way, probably also starting with color categories, so no interpolated color appears in the final image.

Stay tuned for another post, because the real fun starts now that I have the tools 😊


2017 Retrospective
Wed, 10 Jan 2018 22:17:47 +0000
For 2016 I tried to do some kind of retrospective: 2016 retrospective

I failed to do so before the end of 2017, but I'll try to write some notes now.

Achievements

Although in 2016 I managed to release 3 apps, this year I've been a bit busier. I've only released,
  • ComplexFeelings, an iOS app/game to keep track of your feelings and needs.
I should also mention that I finally cleared Metal Gear Solid 5: The Phantom Pain, a game I worked on for 3 years, and it took me an additional 1.5 years to complete...

Power up!

I think I've done lots of learning,
  • Improved my Swift skills, while writing ComplexFeelings and some other unreleased projects.
  • Learned a lot more about Metal. For most of the year I dedicated most of my free time to develop a VR engine entirely written in Metal and Swift. Can't share much of it, but I wrote another blog post on Weighted-Blended Order Independent Transparency
  • Apart from VR, I learned a bit of AR. I learned a bit about Vuforia using Unity, and I followed some ARKit tutorials.
  • Kept working in C#, C++, and Javascript at work.
  • Continued working as an Engineering Manager in my crew at work. I also published a blog post about how I run our scrum retrospectives.
  • My reputation on Stack Overflow increased from 329 to 739. Again, not much, but a small contribution is better than no contribution.

Power down...

A brief interlude with some slightly negative things,
  • I practiced piano, but just to the level I wouldn't forget how to play a couple of songs I memorized... But I haven't managed to play anything new, and my score-reading skills are close to 0 now... Same as last year...
  • I haven't read any book yet... I've read some programming books, but not sure that counts...
  • My Japanese is getting worse... I use it at home, but our conversations aren't technical... Also, I don't read as much as I used to do.
Again, there are too many things I want to do, and too little time...

Fun

Cat! We got a cute 8-month old cat to close the year. Her name is Poppy 😊

Trips!

  • Again, I travelled several times to Barcelona this year, and had really good time with my family ❤️
  • Visited Tokyo again, basically to meet friends and ex-colleagues 😊
  • Short nice trips to Brighton, Girona, and Menorca
And for the highlights of the movies I've watched this year, I have short reviews on Letterboxd

Music albums of the year,

  • "Final Fantasy XV" soundtrack. I haven't played the game yet, but I love the soundtrack by Yoko Shimomura
  • "Utopia" by Björk.

And that's been my 2017! I think it's been quite good, but again I failed to manage my free time and include time for doing NOTHING!


⏪ Previous year | Next year ⏩