Hello.

I am Paul Kinlan.

I lead the Chrome and Open Web Developer Relations team at Google.

Chrome Dev Summit 2018 🔗

Reading time: 2 minutes

I am so excited! Tomorrow is the 6th Chrome Dev Summit and it's all coming together.

Join us at the 6th Chrome Dev Summit to engage with Chrome engineers and leading web developers for a two-day exploration of modern web experiences.

We'll be diving deep into what it means to build a fast, high quality web experience using modern web technologies and best practices, as well as looking at the new and exciting capabilities coming to the web platform.

Read full post.

I'm currently in rehearsals and the tech check the day before the event, and it's looking pretty good :) The talks are done, the MCs are MC'ing, and the jokes are terrible :)

We've split the event into two distinct parts: the Web of Today (Day 1) and thoughts on the Web of Tomorrow (Day 2).

The thinking behind this was that we want web developers to come away with a clear snapshot of what we (Chrome and Google) consider modern web development: the most important areas for businesses and developers to focus on for their users (Speed, UI, and Capability) and, most importantly, how to meet those objectives, based on what we've learned from working with a lot of developers over the last year.

The second day's focus is something new that we are trying this year. The intent is to be much clearer about what we are starting to work on over the year ahead and where we need your feedback and help, so that we build things developers actually need. There should be a lot of deep dives into new technologies that are being designed to fix common problems we all have building modern web experiences, and plenty of opportunity to give our teams feedback so that we can build a better web together with the ecosystem.

I'm really looking forward to seeing everyone over the next two days.


Creating a simple boomerang effect video in JavaScript

Reading time: 5 minutes

Simple steps to create an Instagram-like video boomerang effect on the web. Read More

Grep your git commit log

Reading time: 1 minute

Finding code that was changed in a commit. Read More

Performance and Resilience: Stress-Testing Third Parties by CSS Wizardry 🔗

Reading time: 2 minutes

I was in China a couple of weeks ago for Google Developer Day, showing everyone my QRCode scanner. It was working great until I went offline: when the user was offline (or partially connected) the camera wouldn't start, which meant you couldn't snap QR codes. It took me an age to work out what was happening. It turns out I was mistakenly starting the camera in my onload event, and a Google Analytics request would hang and not resolve in a timely manner. It was this commit that fixed it.
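The fix is simple to sketch (a rough illustration rather than the actual commit; startCamera is a hypothetical helper wrapping getUserMedia): start the camera as soon as the script runs instead of gating it on the load event, so a hung third-party request can't block it.

async function startCamera() {
  // Hypothetical helper: request the camera and wire it to a <video>.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  document.querySelector('video').srcObject = stream;
}

// Before: the camera start was gated on window load, so a hanging
// analytics request delayed it indefinitely while offline.
// window.onload = () => startCamera();

// After: start the camera as soon as the script runs; analytics can
// load (or hang) independently.
startCamera();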

Because these types of assets block rendering, the browser will not paint anything to the screen until they have been downloaded (and executed/parsed). If the service that provides the file is offline, then that's a lot of time that the browser has to spend trying to access the file, and during that period the user is left potentially looking at a blank screen. After a certain period has elapsed, the browser will eventually time out and display the page without the asset(s) in question. How long is that certain period of time?

It's 1 minute and 20 seconds.

If you have any render-blocking, critical, third party assets hosted on an external domain, you run the risk of showing users a blank page for 1.3 minutes.

Below, youโ€™ll see the DOMContentLoaded and Load events on a site that has a render-blocking script hosted elsewhere. The browser was completely held up for 78 seconds, showing nothing at all until it ended up timing out.

Read full post.

I encourage you to read the post because there is a lot of great insight.
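One common mitigation, my note rather than something from the post: inject third-party scripts dynamically, because injected scripts are async by default and can never block first paint. The URL below is a placeholder.

// Dynamically injected scripts load asynchronously, so a slow or
// offline third party can't hold up rendering.
const script = document.createElement('script');
script.src = 'https://third-party.example.com/widget.js';
document.head.appendChild(script);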

Chrome Bug 897727 - MediaRecorder using Canvas.captureStream() fails for large canvas elements on Android 🔗

Reading time: 2 minutes

At the weekend I was playing around with a Boomerang effect video encoder; you can kinda get it working in near real-time (I'll explain later). I got it working in Chrome on desktop, but it would never work properly in Chrome on Android. See the code here.

It looks like when you use captureStream() on a <canvas> with a relatively large resolution (1280x720 in my case), the MediaRecorder API can't encode the video; it doesn't error, and you can't detect ahead of time that it can't encode.

  • Capture a large-resolution video (from getUserMedia at 1280x720) to a buffer for later processing.
  • Create a MediaRecorder with a stream from a canvas element (via captureStream) sized to 1280x720.
  • For each captured frame, call putImageData on the canvas.
  • For each frame, call canvasTrack.requestFrame() at 60fps.

context.putImageData(frame, 0, 0);
canvasStreamTrack.requestFrame();
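Fleshed out, the capture pipeline looks roughly like this (a sketch rather than the exact Glitch code; bufferedFrames is assumed to be the array of ImageData frames captured from getUserMedia):

const canvas = document.createElement('canvas');
canvas.width = 1280;  // the resolution that fails on Android
canvas.height = 720;
const context = canvas.getContext('2d');

// captureStream(0) means frames are only pushed by requestFrame().
const canvasStream = canvas.captureStream(0);
const [canvasStreamTrack] = canvasStream.getVideoTracks();

const recorder = new MediaRecorder(canvasStream, { mimeType: 'video/webm' });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.start();

(async () => {
  // Play the buffered frames back through the canvas at ~60fps.
  for (const frame of bufferedFrames) {
    context.putImageData(frame, 0, 0);
    canvasStreamTrack.requestFrame();
    await new Promise((resolve) => setTimeout(resolve, 1000 / 60));
  }
  recorder.stop(); // on Android only part of the video makes it out
})();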

Demo: https://boomerang-video-chrome-on-android-bug.glitch.me/ Code: https://glitch.com/edit/#!/boomerang-video-chrome-on-android-bug?path=script.js:21:42

What is the expected result?

For the exact demo, I buffer the frames and then reverse them, so you would see the video play forwards and backwards (it works on desktop). In general, I would expect all frames sent to the canvas to be processed by the MediaRecorder API, yet they are not.

What happens instead?

It only captures the stream from the canvas for part of the video and then stops, and it's not predictable where it will stop.

I suspect there is a limit with the MediaRecorder API and what resolution it can encode depending on the device, and there is no way to know about these limits ahead of time.
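For what it's worth, the only ahead-of-time check the platform gives you is MediaRecorder.isTypeSupported(), and it only covers the container and codec, not the resolution, so it can't catch this failure:

// Reports true for the container/codec even on devices where encoding
// a 1280x720 canvas stream subsequently fails, so it is no help here.
console.log(MediaRecorder.isTypeSupported('video/webm;codecs=vp8'));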

As far as I can tell, this has never worked on Android. If you use https://boomerang-video-chrome-on-android-bug.glitch.me, which has a 640x480 video frame, it records just fine. The demo works just fine at higher resolutions on desktop.

Read full post.

If you want to play around with the demo that works on both, click here.

Why Microsoft and Google love progressive web apps | Computerworld 🔗

Reading time: 2 minutes

A nice post about PWAs from Mike Elgan. I am not sure about Microsoft's goal with PWAs, but I think ours is pretty simple: we want users to have access to content and functionality instantly, in a way they expect to be able to interact with it on their devices. The web should reach everyone across every connected device, and a user should be able to access it in their preferred modality: as an app if that's what they expect (on mobile, say), or via voice on an assistant, etc.

We're still a long way from the headless web; however, one thing really struck me in the article:

Another downside is that PWAs are highly isolated. So it's hard and unlikely for different PWAs to share resources or data directly.

Read full post.

Sites and apps on the web are not supposed to be isolated; the web is linkable, indexable, ephemeral. Yet we are getting more siloed with each site we build. We are creating unintended silos because the platform doesn't let users easily get their data into and out of sites. I'm not talking about RDF or anything like that; basic operations such as copy and paste, drag and drop, share to site, and share from site are broken on the web of today, and that's before we get to IPC between frames, workers, and windows.

Building a video editor on the web. Part 0.1 - Screencast

Reading time: 2 minutes

In this post, I'm sharing a screencast demonstrating how I built a web-based webcam and screen recorder using the navigator.getDisplayMedia API. This allows users to grant access to their screen content for recording. The code provided captures both screen and audio, combines them into a single video stream, and allows downloading the recorded video as a webm file. This is a very early stage, and the current output is raw. The ultimate goal is to build a full-fledged video editor in the browser, but for now, this screencast shows the initial steps in capturing video and audio. Read More
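The core of the recorder looks something like this (a simplified sketch of what the screencast builds, written with the newer navigator.mediaDevices.getDisplayMedia() spelling):

async function recordScreencast() {
  // Grab the screen and the microphone, then merge them into one stream.
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  const combined = new MediaStream([
    ...screen.getVideoTracks(),
    ...mic.getAudioTracks()
  ]);

  const recorder = new MediaRecorder(combined, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    // Offer the raw recording as a downloadable webm file.
    const blob = new Blob(chunks, { type: 'video/webm' });
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'screencast.webm';
    a.click();
  };

  recorder.start();
  return recorder; // call .stop() on the returned recorder to finish
}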

894556 - Multiple video tracks in a MediaStream are not reflected on the videoTracks object on the video element 🔗

Reading time: 2 minutes

The first issue I have found trying to build a video editor on the web.

I have multiple video streams (desktop and webcam), and I wanted to be able to toggle between them on one video element so that I can quickly switch between the webcam and the desktop without breaking the MediaRecorder.

It looks like you should be able to do this by toggling the selected property on the videoTracks object of the <video> element, but you can't: the array of tracks contains only one element (the first video track on the MediaStream), as the sketch below shows.
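In other words, the spec suggests a toggle as simple as this should work (videoElement is assumed to already have the combined stream attached):

// Switch the rendered track from the desktop capture to the webcam.
const tracks = videoElement.videoTracks;
tracks[0].selected = false;
tracks[1].selected = true; // fails in Chrome: tracks.length is 1, not 2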

What steps will reproduce the problem?

  • Get two MediaStreams with video tracks.
  • Add them to a new MediaStream and attach it as srcObject on a videoElement.
  • Check the videoElement.videoTracks object and see there is only one track.

Demo at https://multiple-tracks-bug.glitch.me/

What is the expected result? I would expect videoElement.videoTracks to have two elements.

What happens instead? It only has the first videoTrack that was added to the MediaStream.

Read full post.

Repro case.

window.onload = () => {
  if ('getDisplayMedia' in navigator) warning.style.display = 'none';

  let stream;
  let webcamStream;
  let desktopStream;

  captureBtn.onclick = async () => {
    // Note: current Chrome spells this navigator.mediaDevices.getDisplayMedia();
    // at the time of this bug report it hung off navigator directly.
    desktopStream = await navigator.getDisplayMedia({ video: true });
    webcamStream = await navigator.mediaDevices.getUserMedia({
      video: { height: 1080, width: 1920 },
      audio: true
    });

    // Combine the desktop and webcam tracks into one stream.
    let tracks = [...desktopStream.getTracks(), ...webcamStream.getTracks()];
    console.log('Tracks to add to stream', tracks);

    stream = new MediaStream(tracks);
    console.log('Tracks on stream', stream.getTracks());

    videoElement.srcObject = stream;

    // I would expect the length to be 2 and not 1.
    console.log('Tracks on video element that has stream', videoElement.videoTracks);
  };
};

Building a video editor on the web. Part 0.

Reading time: 3 minutes

I'm embarking on a project to build a web-based video editor! The goal is to create a tool that simplifies video creation and editing entirely within the browser. Think Screenflow, but accessible to everyone directly on the web. This project is driven by my own needs for creating device demos, screencasts, and other videos. I've already made some progress (check out the demo!), but there's a lot more to do. I'll be exploring existing web technologies to record audio/video, manipulate content (watermarks, filters, overlays), and output in various formats. This isn't about building a massive commercial product, but rather about understanding what's possible and empowering others to create great videos using the open web. Read More

Barcode detection in a Web Worker using Comlink 🔗

Reading time: 2 minutes

I'm a big fan of QR codes; they are a very simple and neat way to exchange data between the real world and the digital world. For a few years now I've had a little side project called QRSnapper (well, it's had a few names, but this is the one I've settled on) that uses the getUserMedia API to take live data from the user's camera so that it can scan for QR codes in near real time.

The goal of the app was to maintain 60fps in the UI and near-instant detection of the QR code, which meant I had to put the detection code into a Web Worker (pretty standard stuff). In this post I just wanted to quickly share how I used Comlink to massively simplify the logic in the Worker.

qrclient.js

import * as Comlink from './comlink.js';

const proxy = Comlink.proxy(new Worker('/scripts/qrworker.js')); 

export const decode = async function (context) {
  try {
    let canvas = context.canvas;
    let width = canvas.width;
    let height = canvas.height;
    let imageData = context.getImageData(0, 0, width, height);
    return await proxy.detectUrl(width, height, imageData);
  } catch (err) {
    console.log(err);
  }
};

qrworker.js (web worker)

import * as Comlink from './comlink.js';
import {qrcode} from './qrcode.js';

// Use the native API if it's available.
let nativeDetector = async (width, height, imageData) => {
  try {
    let barcodeDetector = new BarcodeDetector();
    let barcodes = await barcodeDetector.detect(imageData);
    // Return the first barcode found.
    if (barcodes.length > 0) {
      return barcodes[0].rawValue;
    }
  } catch(err) {
    // The native detector failed, so fall back to the polyfill for this
    // frame and every subsequent one.
    detector = workerDetector;
    return workerDetector(width, height, imageData);
  }
};

// Use the polyfill.
let workerDetector = async (width, height, imageData) => {
  try {
    return qrcode.decode(width, height, imageData);
  } catch (err) {
    // The library throws an exception when there are no QR codes.
    return;
  }
};

let detectUrl = async (width, height, imageData) => {
  return detector(width, height, imageData);
};

let detector = ('BarcodeDetector' in self) ? nativeDetector : workerDetector;
// Expose the API to the client pages.
Comlink.expose({detectUrl}, self);

I really love Comlink; I think it is a game changer of a library, especially when it comes to creating idiomatic JavaScript that works across threads. And a final neat thing here: the native barcode detection API can be run inside a worker, so all the logic is encapsulated away from the UI.

Read full post.

Running FFMPEG with WASM in a Web Worker 🔗

Reading time: 2 minutes

I love FFMPEG.js; it's a neat tool that is compiled with asm.js and lets me build JS web apps that can quickly edit videos. FFMPEG.js also works with web workers so that you can encode videos without blocking the main thread.

I also love Comlink. Comlink lets me easily interact with web workers by exposing functions and classes without having to deal with a complex postMessage state machine.

I recently got to combine the two. I was experimenting with getting FFMPEG exported to WebAssembly (it works, yay!) and I wanted to clean up all of the postMessage work in the current FFMPEG.js project. Below is what the code now looks like; I think it's pretty neat. We have one worker that imports ffmpeg.js and Comlink and simply exposes the ffmpeg interface, and then we have the web page that loads the worker and uses Comlink to create a proxy to the ffmpeg API.

Neat.

worker.js

importScripts('https://cdn.jsdelivr.net/npm/comlinkjs@3.0.2/umd/comlink.js');
importScripts('../ffmpeg-webm.js'); 
Comlink.expose(ffmpegjs, self);

client.html

// Assumptions: this runs inside an async handler, `file` comes from an
// <input type="file">, `data` holds its bytes (e.g. via FileReader), and
// worker.js is the script above.
const worker = new Worker('worker.js');
let stdout = '', stderr = '';

let ffmpegjs = await Comlink.proxy(worker);
let result = await ffmpegjs({
   arguments: ['-y', '-i', file.name, 'output.webm'],
   MEMFS: [{name: file.name, data: data}],
   stdin: Comlink.proxyValue(() => {}),
   onfilesready: Comlink.proxyValue((e) => {
     let data = e.MEMFS[0].data;
     output.src = URL.createObjectURL(new Blob([data]));
     console.log('ready', e);
   }),
   print: Comlink.proxyValue(function(data) { console.log(data); stdout += data + "\n"; }),
   printErr: Comlink.proxyValue(function(data) { console.log('error', data); stderr += data + "\n"; }),
   postRun: Comlink.proxyValue(function(result) { console.log('DONE', result); }),
   onExit: Comlink.proxyValue(function(code) {
     console.log("Process exited with code " + code);
     console.log(stdout);
   }),
});

I really like how Comlink, workers, and WASM-compiled modules play together. I get idiomatic JavaScript that interacts with the WASM module directly, and it runs off the main thread.

Read full post.

Translating a blog using Google Cloud Translate and Hugo 🔗

Reading time: 4 minutes

I recently returned from a trip to India to attend the Google4India event (report soon) and to meet with a lot of businesses and developers. One of the most interesting topics discussed was the push for more content in the languages of users in the country, and it was particularly apparent across all of Google's products, from making it easier to search in the user's language, to finding content, to reading it back to users in either text or voice form.

The entire trip got me thinking. My blog is built with Hugo, and Hugo now supports content written in multiple languages. Hugo is entirely static, so creating new content is a matter of making a new file and letting the build system do its magic. So maybe I could build something to make my content available to more people by running my static content through a translation tool, because human translation of content is very expensive.

A couple of hours before my flight back to the UK, I created a little script that takes my markdown files and runs them through Google Cloud Translate to create a quick translation of the page that I can then quickly host. The entire solution is presented below. It's a relatively basic processor: it ignores the Hugo preamble, it ignores 'code', and it ignores pull quotes; my assumption was that these are always meant to be left the way they were written.

Note: it looks like the machine-learning systems behind Google Translate learn from content on the web, so it's important to mark up your page so that the learning tools don't use Google-translated content as input to their own algorithms.

// Imports the Google Cloud client library
const Translate = require('@google-cloud/translate');
const program = require('commander');
const fs = require('fs');
const path = require('path');

program
  .version('0.1.0')
  .option('-s, --source [path]', 'Add in the source file.')
  .option('-t, --target [lang]', 'Add target language.')
  .parse(process.argv);

// Creates a client
const translate = new Translate({
  projectId: 'html5rocks-hrd'
});

const options = {
  to:  program.target,
};

async function translateLines(text) {
  if (text.trim().length === 0) return text; // don't send empty blocks to the API
  const output = [];
  let results = await translate.translate(text, options);

  let translations = results[0];
  translations = Array.isArray(translations)
    ? translations
    : [translations];

  translations.forEach((translation, i) => {
    output.push(translation)
  });

  return output.join('\n');
};

// Translates the text into the target language. "text" can be a string for
// translating a single piece of text, or an array of strings for translating
// multiple texts.
(async function (filePath, target) {

  const text = fs.readFileSync(filePath, 'utf8');

  const lines = text.split('\n');
  let translateBlock = [];
  const output = [];

  let inHeader = false;
  let inCode = false;
  let inQuote = false;
  for (const line of lines) {
    // Don't translate the Hugo preamble (front matter)
    if (line.startsWith('---') && inHeader) { inHeader = false; output.push(line); continue; }
    if (line.startsWith('---')) { inHeader = true; output.push(line); continue; }
    if (inHeader) { output.push(line); continue; }

    // Don't translate code
    if (line.startsWith('```') && inCode) { inCode = false; output.push(line); continue; }
    if (line.startsWith('```')) { inCode = true; output.push(await translateLines(translateBlock.join(' '))); translateBlock = []; output.push(line); continue; }
    if (inCode) { output.push(line); continue; }

    // Don't translate quotes
    if (inQuote && line.startsWith('>') === false) { inQuote = false; }
    if (line.startsWith('>')) { inQuote = true; output.push(await translateLines(translateBlock.join(' '))); translateBlock = []; output.push(line); continue; }
    if (inQuote) { output.push(line); continue; }

    if (line.charAt(0) === '\n' || line.length === 0) { output.push(await translateLines(translateBlock.join(' '))); output.push(line); translateBlock = []; continue;} 

    translateBlock.push(line);
  }

  if(translateBlock.length > 0) output.push(await translateLines(translateBlock.join(' ')))

  const result = output.join('\n');
  const newFileName = path.parse(filePath);
  fs.writeFileSync(`content/${newFileName.name}.${target}${newFileName.ext}`, result);

})(program.source, program.target);
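For reference, the commander flags above mean the script is invoked along these lines (the file name is just an example): node translate.js --source content/my-post.md --target hi, which writes the translated copy out as content/my-post.hi.md.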

Overall, I am very happy with the process. I understand that machine translation is not perfect, but my thinking is this: by increasing the reach of my content to people who might be searching in their own languages rather than in English, I can increase the discovery surface area of my content and hopefully help more people.

It will take a while to see if this actually helps people, so I will report back when I have more data.... Now to run my script across more of my site :)

Apple - Web apps - All Categories 🔗

Reading time: 1 minute

Remember when Web Apps were a recommended way to use apps on the iPhone?

What are web apps? Learn what they are and how to use them.

Read full post.

In about 2013, Apple started to redirect the /webapps/ top-level directory to /iphone/.

The thing is, the directory was actually pretty good; a lot of the apps in there still work today. Looking at the App Store, though, it solved a lot more of the problems that developers had: better discovery and search, specifically because the App Store was directly on the device. The App Store was also starting to introduce features that removed friction for users and developers, specifically with regards to payments.

Gears API 🔗

Reading time: 1 minute

I'm writing up a blog post about the early mobile web APIs, and Alex Russell reminded me of Google Gears.

Gears modules include:

  • LocalServer: cache and serve application resources (HTML, JavaScript, images, etc.) locally
  • Database: store data locally in a fully-searchable relational database
  • WorkerPool: make your web applications more responsive by performing resource-intensive operations asynchronously

Read full post.

I think it is interesting to see that AppCache, WebSQL, Geolocation, and Web Workers came out of the ideas in Google Gears, and it's only the latter two that really survived. WebSQL was never broadly supported and was replaced by IndexedDB, and AppCache was replaced by Service Worker.

RSS Feed to Google Chat Webhook using Cloud Functions for Firebase and Superfeedr 🔗

Reading time: 2 minutes

We use Google Chat internally a lot to communicate across our team; it's kinda like our Slack. We also create a lot of content that is accessible via RSS feeds, and we even have a team feed that you can all view. It wasn't until recently that I found out it was pretty easy to create a simple post-only bot via webhooks, and that gave me an idea: I could create a simple service that polls RSS feeds and then sends new items to our webhook, which posts them directly into our team chat.

It was pretty simple in the end, and I've included all the code below. I used Firebase Functions (I suspect this is just as easy on other function-as-a-service platforms) and Superfeedr. Superfeedr is a service that can listen for Pubsubhubbub pings (now WebSub), and it will also poll RSS feeds that don't have Pubsub set up. When it finds new entries in a feed, it pings a configured URL (in my case my Cloud Function in Firebase) with an XML or JSON representation of the newly found feed data; all you have to do is parse the data and do something with it.

const functions = require('firebase-functions');
const express = require('express');
const cors = require('cors');
const fetch = require('node-fetch');
const app = express();

// Automatically allow cross-origin requests
app.use(cors({ origin: true }));

app.post('/', (req, res) => {
  const { webhook_url } = req.query;
  const { body } = req;
  if (body.items === undefined || body.items.length === 0) {
    res.send('');
    return;
  }

  const item = body.items[0];
  const actor = (item.actor && item.actor.displayName) ? item.actor.displayName : body.title;

  fetch(webhook_url, {
    method: 'POST',
    headers: {
      "Content-Type": "application/json; charset=utf-8",
    },
    body: JSON.stringify({
      "text": `*${actor}* published <${item.permalinkUrl}|${item.title}>. Please consider <https://twitter.com/intent/tweet?url=${encodeURIComponent(body.items[0].permalinkUrl)}&text=${encodeURIComponent(body.items[0].title)}|Sharing it>.`
    })  
  }).then(() => {
    return res.send('ok');
  }).catch(() => {
    return res.send('error')
  });
})
// Expose Express API as a single Cloud Function:
exports.publish = functions.https.onRequest(app);

Read full post.

I was surprised and delighted about how easy it was to set up.
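To wire it up, you point Superfeedr at the deployed function's URL and pass your Google Chat webhook address in the webhook_url query parameter, something like https://us-central1-<project>.cloudfunctions.net/publish?webhook_url=<chat-webhook-url>, where the region and project are placeholders for your own deployment.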

Using HTTPArchive and Chrome UX report to get Lighthouse score for top visited sites in India.

Reading time: 3 minutes

A quick dive into how to use Lighthouse, HTTPArchive, and the Chrome UX Report to try to understand how users in a country might experience the web. Read More

Getting Lighthouse scores from HTTPArchive for sites in India.

Reading time: 4 minutes

A quick dive into how to use Lighthouse to try to understand how users in a country might experience the web. Read More

'Moving to a Chromebook' by Rumyra's Blog 🔗

Reading time: 3 minutes

Ruth John moved to Chrome OS (temporarily):

The first thing, and possibly the thing with the least amount of up to date information out there, was enabling Crostini. This runs Linux in a container on the Chromebook, something you pretty much want straight away after spending 15 minutes on it.

I have the most recent Pixel, the 256GB version. Here's what you do.

  • Go to settings.
  • Click on the hamburger menu (top left) - right at the bottom it says 'About Chrome OS'
  • Open this and there's an option to put your machine into dev mode
  • It'll restart and you'll be in dev mode - this is much like running Canary over Chrome and possibly turning on a couple of flags. It may crash, but what the hell, you'll have Linux capabilities
  • Now you can go back into Settings and in regular settings there's a 'Linux apps' option. Turn this on. It'll install Linux. Once this is complete you'll have a terminal open for you. Perfect

Read full post.

Ruth has a great write-up of moving to Chrome OS because her main machine broke.

I moved to Chrome OS full-time 4 months ago (before Google I/O) and only moved back to the Mac because I broke my PixelBook (now fixed).

For me it's one of the best web development machines out there today. It's the only device on which I can test 'true mobile': you can install Chrome on Mobile, Firefox Mobile, Samsung Browser, Brave, etc. on it via the ARC platform. Crostini is also a game changer for Chrome OS, as it brings a lot of the Linux app ecosystem to Chrome OS and really starts to fill a huge app gap for me; I've got Firefox, vim, git, VS Code, Node, npm, all my build tools, GIMP, and Inkscape... That's not to say it has been perfect: Crostini could be faster, it's not GPU accelerated yet, and it could be better integrated with the file manager. Finally, the PixelBook really needs more physical ports: I can attach two 4K screens to it, but I can't charge at the same time.

I think Ruth's wrap-up is also quite accurate: the PixelBook is an expensive machine, but I am very, very excited to see this coming to more and more devices (especially those at vastly lower price points).

Would I pay full price for it? I'm not sure I would pay full price for anything on the market right now. Point me in the direction of a system that will run my graphics software and makes a good dev machine (with minimal setup) and lasts more than 18 months, point me in the direction of a worthy investment and I will pay the money.

Yup.

PWA: Progressive Web All-the-things

Reading time: 8 minutes

This blog post discusses the evolution of Progressive Web Apps (PWAs) since their inception in 2015. While PWAs offer numerous benefits like offline functionality, push notifications, and installability, the author observes that adoption hasn't been universal. Many developers and businesses misunderstand PWAs, sometimes treating them as separate products or focusing on single features like push notifications. The post argues that the focus should shift from "apps" to user experience. It proposes a set of principles for modern web experiences: discoverable, safe, fast, smooth, reliable, and meaningful. These principles aim to guide developers towards building better web experiences that naturally embody the core values of PWAs, benefiting both users and businesses. Read More

What are the pain points for web designers? - Mustafa Kurtuldu 🔗

Reading time: 3 minutes

Mustafa writes:

Tooling is complicated; we are a tooling-focused industry, and the tools change so much. I have used roughly eight different tools, from Photoshop to Sketch. That's before we add prototyping tools to the mix. This may be something we just have to accept. After all, type standards only really started to settle in the 90s, and typography is a 500-year-old discipline.

Designers are still finding it difficult to prove the importance of the process. I think this is something that we have to take on board: to learn how to educate and not just expect everyone to trust us by default. That takes time; perhaps using scenario-based design or design workshops like a design sprint would help. Getting non-designers to observe users while using a prototype they created is one of the best experiences I have seen in this field.

Cross-browser support is lacking crucial features. Designers need to understand developer tooling, to better scope out what is possible. I think using paired programming or the design process described above can help.

Responsive design is still challenging. I think this is in part due to the tools we use; I would love Chrome Design Tools that would help turn the browser into a creative tool. This space is where I think the next evolutionary step for site and web app creation is at. Mozilla is doing some fantastic work in this space, with their layout and shapes tooling.

All in all, the challenges we face seem to be the age-old ones: process, tools, and respect.

Read full post.

I found this a very interesting post, and it's a complement to a post I wrote about the challenges facing web developers. It's not surprising that browser compat is an issue, but what is still a concern is that building for IE11 is still holding the industry back. Likewise, Mustafa points out that there is still an issue with the tooling around responsive design, and that the emphasis on a single responsive solution always leads to the following (from Mustafa's post):

Designing once and using everywhere is still a hard-to-reach ambition.

This is a problem I think we all still wrestle with. On one hand, we want everyone to build a responsive solution that can serve every user on every device form factor; on the other hand, user context is important, and often the user will only be willing to perform certain actions at certain times. We see this a lot in the retail and commerce industry: people will browse on mobile and complete on desktop. The question then becomes: do you cater for this multi-modal model, or build a consistent experience across all devices? I suspect the answer is 'it depends', but either way it's a hard problem for everyone, from product teams to engineering teams.