
Registering as a Share Target with the Web Share Target API

Paul Kinlan

Pete LePage introduces the Web Share Target API and its availability in Chrome via an origin trial.

Until now, only native apps could register as a share target. The Web Share Target API allows installed web apps to register with the underlying OS as a share target to receive shared content from either the Web Share API or system events, like the OS-level share button.

Read full post.

This API is a game changer on the web: it opens the web up to something that was once available only to native apps, native sharing. Apps are silos; they suck in all your data and make it hard to access across platforms. Share Target starts to level the playing field so that the web can play in the same game.

The Twitter mobile experience already has Share Target enabled. This post was created using the Share Target I have defined in my site’s ‘admin panel’ manifest.json. It works pretty well, and the minute file support lands I will be able to post any image or blob on my device to my blog.
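
For reference, the registration itself lives in the web app manifest. A minimal sketch, assuming an ‘admin panel’ style setup like mine; the exact field names evolved during the origin trial, so treat this as illustrative rather than definitive:

{
  "name": "Blog Admin",
  "short_name": "Admin",
  "start_url": "/admin/",
  "display": "standalone",
  "share_target": {
    "action": "/admin/share/",
    "params": {
      "title": "title",
      "text": "text",
      "url": "url"
    }
  }
}

When another app shares to the installed PWA, the browser opens the action URL with the shared title, text, and URL passed as query parameters for the page to handle.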

Very exciting times.

Read the linked post to learn more about the timelines for when this API should go live and how to use it.


Why Build Progressive Web Apps: Push, but Don't be Pushy! Video Write-Up

Paul Kinlan

A great article, video, and sample by Thomas Steiner on good push notifications on the web.

A particularly bad practice is to pop up the permission dialog on page load, without any context at all. Several high traffic sites have been caught doing this. To subscribe people to push notifications, you use the PushManager interface. Now to be fair, this does not allow the developer to specify the context or the to-be-expected frequency of notifications. So where does this leave us?

Read full post.

Web Push is an amazingly powerful API, but it’s easy to abuse it and annoy your users. The bad thing for your site is that if a user blocks notifications because you prompted without warning, you don’t get the chance to ask again.

Treat your users with respect: context is king for Web Push notifications.
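
What does asking with context look like in code? A minimal sketch, assuming a registered service worker plus a hypothetical VAPID key and backend endpoint: request permission only from an explicit user gesture, after the user knows what the notifications are for.

// Called from a click handler on an "Enable notifications" button the user
// pressed deliberately - never on page load.
async function enablePushNotifications() {
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return; // respect the answer; don't nag

  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: VAPID_PUBLIC_KEY // hypothetical application server key
  });

  // Hand the subscription to your server so it can send pushes later.
  await fetch('/api/push-subscriptions', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription)
  });
}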

Maybe Our Documentation "Best Practices" Aren't Really Best Practices

Paul Kinlan

Kayce Basques, an awesome tech writer on our team, wrote up a pretty amazing article about his experiences measuring how well existing documentation best practices work for explaining technical material. Best practices in this sense can be well-known industry standards for technical writing, or your own company’s writing style guide. Check it out!

Recently I discovered that a supposed documentation “best practice” may not actually stand up to scrutiny when measured in the wild. I’m now on a mission to get a “was this page helpful?” feedback widget on every documentation page on the web. It’s not the end-all be-all solution, but it’s a start towards a more rigorous understanding of what actually makes our docs more helpful.

Read full post.

Whilst I am not a tech writer, my role involves a huge amount of engagement with our tech writing team, as well as publishing a lot of ‘best practices’ for developers myself. I was amazed by how much depth and research Kayce has put into the art of writing modern docs through the lens of our team’s content. I fully encourage you to read Kayce’s article in depth - I learnt a lot. Thank you Kayce!

Feature Policy & the Well-Lit Path for Web Development (Chrome Dev Summit 2018)

Paul Kinlan

Jason did an amazing talk about a little-known but new area of the web platform ‘Feature Policy’.

Feature Policy is a new primitive which allows developers to selectively enable, disable, and modify the behaviour of certain APIs and features in the browser. It’s like CSP, but for features & APIs! Teams can use new tools like Feature Policy and the Reporting API to catch errors before they grow out of control, ensure site performance stays high, keep code quality healthy, and help avoid the web’s biggest footguns.

Check out featurepolicy.rocks for more information about Feature Policy, code samples, and live demos.

Submit new ideas for policies or feedback on existing policies at → https://bit.ly/2B3gDEU.

To learn more about the Reporting API see https://bit.ly/rep-api.

Read full post.

Feature Policy is an interesting area, and at first it can be hard to work out where it fits.

There are a couple of areas where I see it being beneficial:

  1. Control 3rd-party content. As an embedder, you should be able to control what functionality runs in the context of your page and when it runs. Feature Policy gives you that control (see the sketch after this list). Don’t want iframes to autoplay video? Turn it off. Don’t want third-party iframes to request geolocation? Turn it off. Don’t want iframes to access sensor information? Turn it off. You should be in control of your experience, not third-party sites.
  2. Stay on target in development. We talked a lot at Chrome Dev Summit about perf budgets, yet today they can be hard to reason about. Feature Policy, enabled on your development and staging servers, will help you know if any set of changes you make will breach your performance budgets, by stopping you from doing the wrong thing. A case in point: our very own Chrome Dev Summit site had the image policy called ‘max-downscaling-image’ enabled; it inverts the colour of an image when it has been downscaled too much (a large image displayed in a small container). Feature Policy picked it up and enabled us to make a decision about what to do. In the end, we disabled the policy because we were using the larger version of the image in multiple places and the images were already cached at that point.
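
To give a flavour of the first point, here is a minimal sketch of locking down a third-party embed from JavaScript; the embed URL is hypothetical, and the same policies can be applied page-wide with the Feature-Policy response header:

// Give the third-party iframe an explicit container policy so the embedded
// page cannot autoplay media or request geolocation.
const frame = document.createElement('iframe');
frame.src = 'https://third-party.example.com/embed'; // hypothetical embed
frame.allow = "autoplay 'none'; geolocation 'none'";
document.body.appendChild(frame);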

I do encourage you all to look into Feature Policy a lot more because it will play an important part in the future of the web. If you want to see the latest policies that Chrome is implementing then check out Feature Policy on Chrome Status.

Photos from Chrome Dev Summit 2018

Paul Kinlan

Some awesome photos from this year’s Chrome Dev Summit.

I love this event ;)

Read full post.

I am like Wally - if you can find me, you get a sticker.

Chrome Dev Summit 2018

Paul Kinlan

I am so excited! Tomorrow is the 6th Chrome Dev Summit and it’s all coming together.

Join us at the 6th Chrome Dev Summit to engage with Chrome engineers and leading web developers for a two-day exploration of modern web experiences.

We’ll be diving deep into what it means to build a fast, high quality web experience using modern web technologies and best practices, as well as looking at the new and exciting capabilities coming to the web platform.

Read full post.

I’m currently in rehearsals and the tech check the day before the event, and it’s looking pretty good :) The talks are done, the MCs are MC’ing, and the jokes are terrible :)

We’ve split the event into two distinct groups: Web of Today (Day 1); and thoughts on the Web of Tomorrow (Day 2).

The thinking I had behind this was that we want web developers to come away with a clear understanding of what we (Chrome and Google) think is a good snapshot of modern web development, what we think are the most important areas for businesses and developers to focus on for their users - Speed, UI, and Capability - and, most importantly, how to meet those objectives based on our learnings from working with a lot of developers over the last year.

The second day’s focus is interesting, and it’s something new that we are trying this year. The intent of the day is to be a lot clearer about what we are starting to work on over the year ahead, and where we need your feedback and help to make sure that we are building things that developers need. There should be a lot of deep dives into new technologies that are being designed to fix common problems we all have building modern web experiences, and a lot of opportunity to give our teams feedback so that we can help build a better web with the ecosystem.

I’m really looking forward to seeing everyone over the next two days.

Creating a simple boomerang effect video in JavaScript

Paul Kinlan

Simple steps to create an Instagram-like video boomerang effect on the web

Read More

Grep your git commit log

Paul Kinlan

Finding code that was changed in a commit

Read More

Performance and Resilience: Stress-Testing Third Parties by CSS Wizardry

Paul Kinlan

I was in China a couple of weeks ago for the Google Developer Day and I was showing everyone my QRCode scanner; it was working great until I went offline. When the user was offline (or partially connected) the camera wouldn’t start, which meant that you couldn’t snap QR codes. It took me an age to work out what was happening, and it turned out I was mistakenly starting the camera in my onload event, and the Google Analytics request would hang and not resolve in a timely manner. It was this commit that fixed it.

Because these types of assets block rendering, the browser will not paint anything to the screen until they have been downloaded (and executed/parsed). If the service that provides the file is offline, then that’s a lot of time that the browser has to spend trying to access the file, and during that period the user is left potentially looking at a blank screen. After a certain period has elapsed, the browser will eventually timeout and display the page without the asset(s) in question. How long is that certain period of time?

It’s 1 minute and 20 seconds.

If you have any render-blocking, critical, third party assets hosted on an external domain, you run the risk of showing users a blank page for 1.3 minutes.

Below, you’ll see the DOMContentLoaded and Load events on a site that has a render-blocking script hosted elsewhere. The browser was completely held up for 78 seconds, showing nothing at all until it ended up timing out.

Read full post.

I encourage you to read the post because there is a lot of great insight.
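
The standard mitigation is to keep non-critical third-party assets off the critical rendering path. A minimal sketch (the script URL is hypothetical): inject the script asynchronously so a slow or unreachable host can no longer hold up first paint.

// Load a third-party script without blocking rendering; if the host is down,
// the page still paints and the script simply never arrives.
const script = document.createElement('script');
script.src = 'https://third-party.example.com/analytics.js'; // hypothetical
script.async = true;
script.onerror = () => console.warn('Third-party script failed to load.');
document.head.appendChild(script);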

Chrome Bug 897727 - MediaRecorder using Canvas.captureStream() fails for large canvas elements on Android

Paul Kinlan

At the weekend I was playing around with a Boomerang effect video encoder; you can kinda get it working in near real-time (I’ll explain later). I got it working in Chrome on desktop, but it would never work properly in Chrome on Android. See the code here.

It looks like when you use captureStream() on a <canvas> that has a relatively large resolution (1280x720 in my case), the MediaRecorder API won’t be able to encode the video; it won’t error, and you can’t detect ahead of time that it can’t encode the video.

  1. Capture a large-res video (from getUM 1280x720) to a buffer for later processing.
  2. Create a MediaRecorder with a stream from a canvas element (via captureStream) sized to 1280x720.
  3. For each frame captured, putImageData on the canvas.
  4. For each frame, call canvasTrack.requestFrame() at 60fps.

context.putImageData(frame, 0, 0);
canvasStreamTrack.requestFrame();

Demo: https://boomerang-video-chrome-on-android-bug.glitch.me/
Code: https://glitch.com/edit/#!/boomerang-video-chrome-on-android-bug?path=script.js:21:42

What is the expected result?

For the exact demo, I buffer the frames and then reverse them, so you would see the video play forwards and backwards (it works on desktop). In general I would expect all frames sent to the canvas to be processed by the MediaRecorder API - yet they are not.

What happens instead?

It only captures the stream from the canvas for part of the video and then stops. It’s not predictable where it will stop.

I suspect there is a limit with the MediaRecorder API and what resolution it can encode depending on the device, and there is no way to know about these limits ahead of time.

As far as I can tell this has never worked on Android. If you use https://boomerang-video-chrome-on-android-bug.glitch.me, which has a 640x480 video frame, it records just fine. The demo works just fine at higher resolutions on desktop.

Read full post.

If you want to play around with the demo that works on both, then click here.
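
For reference, the capture loop described in the repro steps looks roughly like this when expanded; bufferedFrames (an array of ImageData grabbed earlier from getUserMedia) and the exact sizes are assumptions taken from the steps above:

// Record a canvas-backed stream with MediaRecorder, pushing frames manually.
const canvas = document.createElement('canvas');
canvas.width = 1280;  // large resolutions are what trigger the failure on Android
canvas.height = 720;
const context = canvas.getContext('2d');

const stream = canvas.captureStream(0); // 0: frames are only captured on requestFrame()
const [canvasStreamTrack] = stream.getVideoTracks();
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const chunks = [];
recorder.ondataavailable = (event) => chunks.push(event.data);
recorder.start();

for (const frame of bufferedFrames) { // bufferedFrames: hypothetical ImageData[]
  context.putImageData(frame, 0, 0);  // paint the frame on to the canvas
  canvasStreamTrack.requestFrame();   // ask the track to capture that frame
}
recorder.stop(); // on desktop, chunks now hold the encoded webm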

Why Microsoft and Google love progressive web apps | Computerworld

Paul Kinlan

A nice post about PWA from Mike Elgan. I am not sure about Microsoft’s goal with PWA, but I think ours is pretty simple: we want users to have access to content and functionality instantly, and in the way they expect to be able to interact with it on their devices. The web should reach everyone across every connected device, and a user should be able to access it in their preferred modality: as an app if that’s how they expect it (on mobile, maybe), or via voice on an assistant, etc.

We’re still a long way from the headless web; however, one thing really struck me in the article:

Another downside is that PWAs are highly isolated. So it’s hard and unlikely for different PWAs to share resources or data directly.

Read full post.

Sites and apps on the web are not supposed to be isolated; the web is linkable, indexable, ephemeral, yet we are getting more siloed with each site we build. We are creating unintended silos because the platform doesn’t make it easy for users to get their data in and out of sites. I’m not talking about RDF or anything like that; basic operations such as copy and paste, drag and drop, share to site, and share from site are broken on the web of today, and that’s before we get to IPC between frames, workers, and windows.

Building a video editor on the web. Part 0.1 - Screencast

Paul Kinlan

You should be able to create and edit videos using just the web in the browser. It should be possible to provide a user-interface akin to Screenflow that lets you create an output video that combines multiple videos, images, and audio into one video that can be uploaded to services like YouTube. Following on from my previous post that briefly describes the requirements of the video editor, in this post I just wanted to quickly show in a screencast how I built the web cam recorder, and also how to build a screencast recorder :)

Read More

894556 - Multiple video tracks in a MediaStream are not reflected on the videoTracks object on the video element

Paul Kinlan

The first issue I have found trying to build a video editor on the web.

I have multiple video streams (desktop and web cam) and I wanted to be able to toggle between the video streams on one video element so that I can quickly switch between the web cam and the desktop and not break the MediaRecorder.

It looks like you should be able to do it by toggling the selected property on the videoTracks object of the <video> element, but you can’t: the array of tracks contains only one element (the first video track on the MediaStream).

What steps will reproduce the problem?

  1. Get two MediaStreams with video tracks.
  2. Add them to a new MediaStream and attach it as srcObject on a videoElement.
  3. Check the videoElement.videoTracks object and see there is only one track.

Demo at https://multiple-tracks-bug.glitch.me/

What is the expected result? I would expect videoElement.videoTracks to have two elements.

What happens instead? It only has the first videoTrack that was added to the MediaStream.

Read full post.

Repro case.

window.onload = () => {
  if('getDisplayMedia' in navigator) warning.style.display = 'none';

  let blobs;
  let blob;
  let rec;
  let stream;
  let webcamStream;
  let desktopStream;

  captureBtn.onclick = async () => {

       
    desktopStream = await navigator.getDisplayMedia({ video: true });
    webcamStream = await navigator.mediaDevices.getUserMedia({ video: { height: 1080, width: 1920 }, audio: true });

    // Combine the tracks from both streams into a single MediaStream
    let tracks = [...desktopStream.getTracks(), ...webcamStream.getTracks()];
    console.log('Tracks to add to stream', tracks);
    stream = new MediaStream(tracks);
    
    console.log('Tracks on stream', stream.getTracks());
    
    videoElement.srcObject = stream;
    
    console.log('Tracks on video element that has stream', videoElement.videoTracks)
    
    // I would expect the length to be 2 and not 1
  };

};

Building a video editor on the web. Part 0.

Paul Kinlan

You should be able to create and edit videos using just the web in the browser. It should be possible to provide a user-interface akin to Screenflow that lets you create an output video that combines multiple videos, images, and audio into one video that can be uploaded to services like YouTube. This post is really just a statement of intent. I am going to start the long process of working out what is and isn’t available on the platform and seeing how far we can get today.

Read More

Barcode detection in a Web Worker using Comlink

Paul Kinlan

I’m a big fan of QR codes; they are a very simple and neat way to exchange data between the real world and the digital world. For a few years now I’ve had a little side project called QRSnapper — well, it’s had a few names, but this is the one I’ve settled on — that uses the getUserMedia API to take live data from the user’s camera so that it can scan for QR codes in near real time.

The goal of the app was to maintain 60fps in the UI and near-instant detection of the QR code, which meant that I had to put the detection code into a Web Worker (pretty standard stuff). In this post I just wanted to quickly share how I used Comlink to massively simplify the logic in the worker.

qrclient.js

import * as Comlink from './comlink.js';

const proxy = Comlink.proxy(new Worker('/scripts/qrworker.js')); 

export const decode = async function (context) {
  try {
    let canvas = context.canvas;
    let width = canvas.width;
    let height = canvas.height;
    let imageData = context.getImageData(0, 0, width, height);
    return await proxy.detectUrl(width, height, imageData);
  } catch (err) {
    console.log(err);
  }
};

qrworker.js (web worker)

import * as Comlink from './comlink.js';
import {qrcode} from './qrcode.js';

// Use the native APIs
let nativeDetector = async (width, height, imageData) => {
  try {
    let barcodeDetector = new BarcodeDetector();
    let barcodes = await barcodeDetector.detect(imageData);
    // return the first barcode.
    if (barcodes.length > 0) {
      return barcodes[0].rawValue;
    }
  } catch(err) {
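    // Fall back to the polyfill for future calls if the native detector throws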
    detector = workerDetector;
  }
};

// Use the polyfill
let workerDetector = async (width, height, imageData) => {
  try {
    return qrcode.decode(width, height, imageData);
  } catch (err) {
    // the library throws an exception when there are no QR codes.
    return;
  }
}

let detectUrl = async (width, height, imageData) => {
  return detector(width, height, imageData);
};

let detector = ('BarcodeDetector' in self) ? nativeDetector : workerDetector;
// Expose the API to the client pages.
Comlink.expose({detectUrl}, self);

I really love Comlink. I think it is a game changer of a library, especially when it comes to creating idiomatic JavaScript that works across threads. Finally, a neat thing here is that the native barcode detection API can be run inside a worker, so all the logic is encapsulated away from the UI.

Read full post.

Running FFMPEG with WASM in a Web Worker

Paul Kinlan

I love FFMPEG.js; it’s a neat tool that is compiled with asm.js and it lets me build JS web apps that can quickly edit videos. FFMPEG.js also works with web workers so that you can encode videos without blocking the main thread.

I also love Comlink. Comlink lets me easily interact with web workers by exposing functions and classes without having to deal with a complex postMessage state machine.

I recently got to combine the two together. I was experimenting with getting FFMPEG exported to WebAssembly (it works - yay) and I wanted to clean up all of the postMessage work in the current FFMPEG.js project. Below is what the code now looks like - I think it’s pretty neat. We have one worker that imports ffmpeg.js and Comlink and simply exposes the ffmpeg interface, and then we have the web page that loads the worker and uses Comlink to create a proxy to the ffmpeg API.

Neat.

worker.js

importScripts('https://cdn.jsdelivr.net/npm/comlinkjs@3.0.2/umd/comlink.js');
importScripts('../ffmpeg-webm.js'); 
Comlink.expose(ffmpegjs, self);

client.html

const worker = new Worker('worker.js'); // assumes the worker script above is saved as worker.js
let stdout = '', stderr = '';

let ffmpegjs = await Comlink.proxy(worker);
let result = await ffmpegjs({
   arguments: ['-y','-i', file.name, 'output.webm'],
   MEMFS: [{name: file.name, data: data}],
   stdin: Comlink.proxyValue(() => {}),
   onfilesready: Comlink.proxyValue((e) => {
     let data = e.MEMFS[0].data;
     output.src = URL.createObjectURL(new Blob([data]))
     console.log('ready', e)
   }),
   print: Comlink.proxyValue(function(data) { console.log(data); stdout += data + "\n"; }),
   printErr: Comlink.proxyValue(function(data) { console.log('error', data); stderr += data + "\n"; }),
   postRun: Comlink.proxyValue(function(result) { console.log('DONE', result); }),
   onExit: Comlink.proxyValue(function(code) {
     console.log("Process exited with code " + code);
     console.log(stdout);
   }),
});

I really like how Comlink, Workers and WASM compiled modules can play together. I get idiomatic JavaScript that interacts with the WASM module directly and it runs off the main thread.

Read full post.

Translating a blog using Google Cloud Translate and Hugo

Paul Kinlan

I recently returned from a trip to India to attend the Google4India event (report soon) and to meet with a lot of businesses and developers. One of the most interesting changes discussed was the push for more content in the language of the users in the country, and it was particularly apparent across all of Google’s products, which ranged from making it easier to search in the user’s language, to finding content, and also to reading it back to users in either text or voice form.

The entire trip got me thinking. My blog is built with Hugo, and Hugo now supports content written in multiple languages. Hugo is entirely static, so creating new content is a matter of just making a new file and letting the build system do its magic. So maybe I could build something that would make my content more available to more people by running my static content through a translation tool, because human translation of content is very expensive.

A couple of hours before my flight back to the UK, I created a little script that takes my markdown files and runs them through Google Cloud Translate to create a quick translation of the page that I can then quickly host. The entire solution is presented below. It’s a relatively basic processor: it ignores the Hugo preamble, it ignores ‘code’, and it ignores pull quotes; my assumption was that these are always meant to be left the way they were written.

Note: it looks like the translation learning software learns from content on the web, so it’s important to mark up your page so that the learning tools don’t use Google-translated content as input to their algorithms.

// Imports the Google Cloud client library
const Translate = require('@google-cloud/translate');
const program = require('commander');
const fs = require('fs');
const path = require('path');

program
  .version('0.1.0')
  .option('-s, --source [path]', 'Add in the source file.')
  .option('-t, --target [lang]', 'Add target language.')
  .parse(process.argv);

// Creates a client
const translate = new Translate({
  projectId: 'html5rocks-hrd'
});

const options = {
  to:  program.target,
};

async function translateLines(text) {
  if (text.trim().length === 0) return text; // nothing to translate
  const output = [];
  let results = await translate.translate(text, options);

  let translations = results[0];
  translations = Array.isArray(translations)
    ? translations
    : [translations];

  translations.forEach((translation, i) => {
    output.push(translation)
  });

  return output.join('\n');
};

// Translates the text into the target language. "text" can be a string for
// translating a single piece of text, or an array of strings for translating
// multiple texts.
(async function (filePath, target) {

  const text = fs.readFileSync(filePath, 'utf8');

  const lines = text.split('\n');
  let translateBlock = [];
  const output = [];

  let inHeader = false;
  let inCode = false;
  let inQuote = false;
  for (const line of lines) {
    // Don't translate the preamble
    if (line.startsWith('---') && inHeader) { inHeader = false; output.push(line); continue; }
    if (line.startsWith('---')) { inHeader = true; output.push(line); continue; }
    if (inHeader) { output.push(line); continue; }

    // Don't translate code
    if (line.startsWith('```') && inCode) { inCode = false; output.push(line); continue; }
    if (line.startsWith('```')) { inCode = true; output.push(await translateLines(translateBlock.join(' '))); translateBlock = []; output.push(line); continue; }
    if (inCode) { output.push(line); continue; }

    // Don't translate quotes
    if (inQuote && line.startsWith('>') === false) { inQuote = false; }
    if (line.startsWith('>')) { inQuote = true; output.push(await translateLines(translateBlock.join(' '))); translateBlock = []; output.push(line); continue; }
    if (inQuote) { output.push(line); continue; }

    if (line.charAt(0) === '\n' || line.length === 0) { output.push(await translateLines(translateBlock.join(' '))); output.push(line); translateBlock = []; continue;} 

    translateBlock.push(line);
  }

  if(translateBlock.length > 0) output.push(await translateLines(translateBlock.join(' ')))

  const result = output.join('\n');
  const newFileName = path.parse(filePath);
  fs.writeFileSync(`content/${newFileName.name}.${target}${newFileName.ext}`, result);

})(program.source, program.target);
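
To run it (assuming the script above is saved as translate.js and your Cloud project has the Translate API enabled), the invocation would look something like node translate.js --source content/my-post.md --target fr, which writes the translated copy to content/my-post.fr.md alongside the original.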

Overall, I am very happy with the process. I understand that the machine translation is not perfect, but my thinking is that by increasing the reach of my content to people who might be searching in their own languages and not in English, I can increase the discovery surface area of my content and hopefully help more people.

It will take a while to see if this actually helps people, so I will report back when I have more data…. Now to run my script across more of my site :)

Apple - Web apps - All Categories

Paul Kinlan

Remember when Web Apps were a recommended way to use apps on the iPhone?

What are web apps? Learn what they are and how to use them.

Read full post.

In about 2013, Apple started to redirect the /webapps/ top-level directory to /iphone/.

The thing is, the directory was actually pretty good; a lot of the apps in there still work today. However, looking at the AppStore, it solved a lot more problems that developers had: better discovery and search, specifically because the AppStore was directly on the device. The AppStore was also starting to introduce business models that removed friction for users and developers, specifically with regards to payments.

Gears API

Paul Kinlan

I’m writing up a blog post about the early mobile web APIs, and Alex Russell reminded me of Google Gears.

Gears modules include:

  • LocalServer: Cache and serve application resources (HTML, JavaScript, images, etc.) locally
  • Database: Store data locally in a fully-searchable relational database
  • WorkerPool: Make your web applications more responsive by performing resource-intensive operations asynchronously

Read full post.

I think it is interesting to see that AppCache, WebSQL, Geolocation, and Web Workers came out of the ideas in Google Gears, and it’s only the latter two that really survived. WebSQL was never broadly supported and was replaced by IndexedDB, and AppCache was replaced by Service Worker.

RSS Feed to Google Chat Webhook using Cloud Functions for Firebase and Superfeedr

Paul Kinlan

We use Google Chat internally a lot to communicate across our team; it’s kinda like our Slack. We also create a lot of content that is accessible via RSS feeds; we even have a team feed that you can all view. It wasn’t until recently that I found out that it was pretty easy to create a simple post-only bot via webhooks, and that gave me an idea: I could create a simple service that polls RSS feeds and then sends them to our webhook, which can post directly into our team chat.

It was pretty simple in the end, and I’ve included all the code below. I used Firebase Functions - I suspect that this is just as easy on other function-as-a-service sites - and Superfeedr. Superfeedr is a service that can listen to Pubsubhubbub pings (now WebSub), and it will also poll RSS feeds that don’t have Pubsub set up. Then, when it finds new content in a feed, it will ping a configured URL (in my case my Cloud Function in Firebase) with an XML or JSON representation of the newly found feed data - all you have to do is parse the data and do something with it.

const functions = require('firebase-functions');
const express = require('express');
const cors = require('cors');
const fetch = require('node-fetch');
const app = express();

// Automatically allow cross-origin requests
app.use(cors({ origin: true }));

app.post('/', (req, res) => {
  const { webhook_url } = req.query;
  const { body } = req;
  if (body.items === undefined || body.items.length === 0) {
    res.send('');
    return;
  }

  const item = body.items[0];
  const actor = (item.actor && item.actor.displayName) ? item.actor.displayName : body.title;

  fetch(webhook_url, {
    method: 'POST',
    headers: {
      "Content-Type": "application/json; charset=utf-8",
    },
    body: JSON.stringify({
      "text": `*${actor}* published <${item.permalinkUrl}|${item.title}>. Please consider <https://twitter.com/intent/tweet?url=${encodeURIComponent(body.items[0].permalinkUrl)}&text=${encodeURIComponent(body.items[0].title)}|Sharing it>.`
    })  
  }).then(() => {
    return res.send('ok');
  }).catch(() => {
    return res.send('error')
  });
});
// Expose Express API as a single Cloud Function:
exports.publish = functions.https.onRequest(app);
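
To wire it up, you subscribe your feed in Superfeedr and point the subscription at the deployed function, passing the Chat room’s incoming webhook URL in the webhook_url query parameter - something like https://us-central1-YOUR-PROJECT.cloudfunctions.net/publish?webhook_url=YOUR-CHAT-WEBHOOK (the project name and webhook are placeholders).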

Read full post.

I was surprised and delighted by how easy it was to set up.