In a recent project building a web push service I wanted my UI to respond to application-level events (semantic events, if you will), because a couple of components needed information from the system but were not dependent on each other, and I wanted them to manage themselves independently of the ‘business logic’. I looked around at lots of different tools to help me, but because I frequently have a heavy case of NIH syndrome, and because I think people can implement their own infrastructural elements pretty quickly, I decided to quickly knock up a simple client-side PubSub service. It worked pretty well for my needs.
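The sort of service described above can be knocked up in a few lines. This is a minimal illustrative sketch, not the actual code from the project; the names (`PubSub`, `subscribe`, `publish`, the `push:subscribed` topic) are all my own for the example.

```javascript
// A minimal client-side PubSub sketch; the class and topic names here
// are illustrative, not the exact API from the project.
class PubSub {
  constructor() {
    this.topics = new Map(); // topic name -> Set of handlers
  }
  subscribe(topic, handler) {
    if (!this.topics.has(topic)) this.topics.set(topic, new Set());
    this.topics.get(topic).add(handler);
    // Return an unsubscribe function so components can clean up.
    return () => this.topics.get(topic).delete(handler);
  }
  publish(topic, data) {
    const handlers = this.topics.get(topic);
    if (handlers) handlers.forEach(handler => handler(data));
  }
}

// Two independent components can react to the same application event
// without knowing about each other.
const bus = new PubSub();
const seen = [];
bus.subscribe('push:subscribed', info => seen.push(`ui: ${info.endpoint}`));
bus.subscribe('push:subscribed', info => seen.push(`log: ${info.endpoint}`));
bus.publish('push:subscribed', { endpoint: 'https://example.com/ep' });
```

The appeal is that the ‘business logic’ only ever calls `publish`; each component subscribes to the topics it cares about and manages itself.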
The other week I talked about face detection via the Shape Detection API that is in the Canary channel in Chrome. Now barcode detection is in Chrome Canary too (Miguel is my hero ;). Barcodes are huge! They are on nearly every product we buy, and even the much-maligned QR code is huge outside of the US and Europe. Barcodes and QR codes provide a simple way to bridge the physical world and the digital world by transferring small amounts of data between the medium and you.
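For a flavour of the API, here is a hedged sketch of how detection looks in Canary; the `summarizeBarcodes` helper and `scan` function are my own illustrative additions, while the `BarcodeDetector` constructor and `detect` call follow the Shape Detection proposal.

```javascript
// Turn detected barcodes into readable strings. Each detected barcode
// carries a rawValue and a format string per the proposal.
function summarizeBarcodes(barcodes) {
  return barcodes.map(b => `${b.format}: ${b.rawValue}`);
}

// Illustrative wrapper: feature-detect, then scan an image element.
async function scan(imageElement) {
  if (!('BarcodeDetector' in globalThis)) {
    console.log('Barcode detection not supported in this browser');
    return [];
  }
  const detector = new BarcodeDetector({ formats: ['qr_code', 'ean_13'] });
  const barcodes = await detector.detect(imageElement);
  return summarizeBarcodes(barcodes);
}

// The pure helper can be exercised with plain objects:
const demo = summarizeBarcodes([
  { format: 'qr_code', rawValue: 'https://example.com' },
]);
```

Because the API is behind an experiment in Canary, the feature-detection guard is the important part: everything should degrade gracefully where `BarcodeDetector` is absent.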
I recently built a Progressive Web App that takes a screencast from your Android device and then wraps the video in a device frame using FFMPEG.js, like so: I also managed to sort out building ffmpeg.js so that I can, with relative ease, create custom optimized builds of ffmpeg and run them in the browser. Together, I think these two things present a lot of opportunities to build some great new small Progressive Web Apps that push what we think the web is capable of with regards to manipulating audio and video.
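The core of the wrapping step is just assembling ffmpeg arguments for an overlay filter. This is a hedged sketch under my own assumptions: the `deviceFrameArgs` helper, the file names, and the overlay coordinates are illustrative, while the commented `MEMFS` in/out convention follows the ffmpeg.js README.

```javascript
// Build an ffmpeg argument list that overlays a screencast onto a
// device-frame image at a given x/y offset (illustrative helper).
function deviceFrameArgs(videoName, frameName, outName, x, y) {
  return [
    '-i', frameName,                        // device frame image as the base layer
    '-i', videoName,                        // the recorded screencast
    '-filter_complex', `overlay=${x}:${y}`, // position the video on the frame
    outName,
  ];
}

// In the browser this would be handed to ffmpeg.js, roughly:
// const result = ffmpeg({
//   MEMFS: [{ name: 'frame.png', data: frameBytes },
//           { name: 'screen.mp4', data: videoBytes }],
//   arguments: deviceFrameArgs('screen.mp4', 'frame.png', 'out.mp4', 90, 160),
// });
// result.MEMFS[0].data would then hold the framed video.

const args = deviceFrameArgs('screen.mp4', 'frame.png', 'out.mp4', 90, 160);
```

Keeping the argument-building pure makes it easy to tweak the filter graph without touching the worker plumbing.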
FFMPEG.js is an amazing project, and it helped me build one of my latest projects: Device Frame. It is essentially a build of ffmpeg (with a good set of defaults to keep the size small, as small as it can be). If the default build doesn’t support the filters and encoders you need, then you will need to build it yourself. This is more of a note for my future self, but this is what I did to get it working.
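The rough shape of a custom build looks like the following. This is a heavily hedged sketch from memory, not the exact steps: the repository URL is the upstream ffmpeg.js project, but the Makefile editing step and target name will vary by checkout, so treat the comments as pointers rather than instructions.

```shell
# Hedged sketch of a custom ffmpeg.js build; check the project README
# and Makefile on your checkout for the exact lists and targets.
git clone https://github.com/Kagami/ffmpeg.js.git
cd ffmpeg.js
git submodule update --init --recursive

# The Emscripten toolchain must be on PATH first, e.g.:
# source /path/to/emsdk/emsdk_env.sh

# Edit the Makefile to add the filters/encoders you need to the
# relevant configure-flag lists, then build (or build a specific
# target such as the mp4 variant):
make all
```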
As anyone who works for a US-based company but lives in the UK knows, Thanksgiving is a wonderful time of the year. It is the point in the year when we can actually get work done without a barrage of emails hitting us morning and evening. This Thanksgiving free time I wanted to knock a project off my to-do list that had been sitting around for a while: a generic web-push web-hook endpoint.
This is more for my own future reference and noodling with. I converted it from the .aco file with https://github.com/websemantics/Color-Palette-Toolkit:

Pomegranate #f44336
Lavender Blush #ffebee
Pastel Pink #ffcdd2
Sea Pink #ef9a9a
Sunglo #e57373
Burnt Sienna #ef5350
Cinnabar #e53935
Persian Red #d32f2f
Tall Poppy #c62828
Thunderbird #b71c1c
Vivid Tangerine #ff8a80
Yesterday I posted about an update to my Service Worker caching strategy. If you look at my ServiceWorker you will see that there is more to it than just the fix I had to make for storing data in the Cache. I have also introduced a URL routing framework to simplify the logic in my service worker when dealing with different kinds of requests. For example, I don’t want to cache requests to Google Analytics or Disqus, and rather than make my onfetch handler a lot more complex, it was easier to be declarative about the routes that I wanted to manage and then control the logic for each route independently of the others.
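A declarative route table along those lines can be sketched as follows; the `Router` class, the strategy names, and the commented `onfetch` wiring are my own illustrative assumptions, not the actual framework from my service worker.

```javascript
// Illustrative route table: each route pairs a URL predicate with the
// name of the caching strategy to apply.
class Router {
  constructor() {
    this.routes = []; // [{ test, handler }]
  }
  add(test, handler) {
    this.routes.push({ test, handler });
  }
  match(url) {
    const route = this.routes.find(r => r.test(url));
    return route ? route.handler : null;
  }
}

const router = new Router();
// Third-party analytics and comments go straight to the network.
router.add(url => url.hostname.endsWith('google-analytics.com'), 'network-only');
router.add(url => url.hostname.endsWith('disqus.com'), 'network-only');
// Everything else falls back to a cache-first strategy.
router.add(() => true, 'cache-first');

// Inside the service worker the onfetch handler would then be tiny:
// self.onfetch = e => {
//   const strategy = router.match(new URL(e.request.url));
//   e.respondWith(strategies[strategy](e.request));
// };

const strategy = router.match(new URL('https://www.google-analytics.com/collect'));
```

The win is that each route’s logic lives next to its declaration instead of inside one ever-growing fetch handler.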
About 5 months ago I documented my Service Worker caching strategy, and it was noted that it wouldn’t work in Firefox because of my use of waitUntil. It was also noted that, well, my Service Worker didn’t actually work. It worked for me, or so I thought, but every so often on a new page you could see it error and then quickly re-fetch from the network. I made a number of changes to make the code more readable; however, I didn’t solve the actual issue, and it turns out my understanding of cache.
I have been running a small service on Google Compute Engine (Ubuntu) that requires the google-cloud npm module, but I kept hitting an error with grpc_node.node not being found:

Error: Cannot find module '/home/paul_kinlan/web-push-rocks/frontend/node_modules/google-cloud/node_modules/grpc/src/node/extension_binary/grpc_node.node'
    at Function.Module._resolveFilename (module.js:469:15)
    at Function.Module._load (module.js:417:25)
    at Function._load (/usr/lib/node_modules/pm2/node_modules/pmx/lib/transaction.js:62:21)
    at Module.require (module.js:497:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/home/paul_kinlan/web-push-rocks/frontend/node_modules/google-cloud/node_modules/grpc/src/node/src/grpc_extension.js:38:15)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)

It was incredibly frustrating, as I have not seen any recognition of the issues that many people are facing.
I was at the Chrome Dev Summit party when Miguel Casas-Sanchez from the Chrome team came up to me and said, “Hey Paul, I have a demo for you”. Once I saw it, I had to get it into my talk. That API was the Shape Detection API, which is currently in the WICG in an incubation and experimentation phase and is a nice incremental addition to the platform.
I like Web Components. It has taken a long time to get here, but things are moving in the right direction, with Safari shipping Shadow DOM and now landing support for Custom Elements. I’ve been thinking a lot recently about Web Components (that is, custom elements, template, Shadow DOM and CSS variables). Specifically, I have been focusing on the custom element space and how it can play out on the web in the future, because I believe there are lots of interesting possibilities in how their usage will evolve over time.
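To ground the discussion, here is a minimal custom element sketch; the `UserCard` element and the simplified name check are my own illustrative examples (the real spec rule for valid names has more exclusions than this regex captures).

```javascript
// Custom element names must contain a hyphen so they can never collide
// with built-in elements; this is a simplified check of that spec rule.
function isValidCustomElementName(name) {
  return /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);
}

// In the browser, defining an element is this small:
// class UserCard extends HTMLElement {
//   connectedCallback() {
//     this.attachShadow({ mode: 'open' });
//     this.shadowRoot.textContent = this.getAttribute('name');
//   }
// }
// if (isValidCustomElementName('user-card')) {
//   customElements.define('user-card', UserCard);
// }

const valid = isValidCustomElementName('user-card');  // true
const invalid = isValidCustomElementName('usercard'); // false: no hyphen
```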
Autofill has a chequered history, filled with what I believe is a mild case of FUD. Chrome for the longest time decided to ignore autocomplete=off on forms and fields because we believed that autocomplete provides a huge amount of value for users, especially in the context of mobile. One of the problems is that it is incredibly hard to measure how impactful autocomplete is on your site. There aren’t really any events that fire when autocomplete occurs, so how can you tell it has happened?
It was my eldest son’s birthday the other day, and late in the evening on said birthday I thought, “I will give my son the gift of learning how to program”. It worked as I expected: he looked at me, wrinkled his nose, and got back to playing FIFA sitting next to me whilst I smashed out a terrible game on the amazing micro:bit. My quick summary is that I think it is an amazing little device for quickly getting started with programming, and with programming hardware in particular.
In my trials and tribulations trying to detect when a field has been autofilled, I needed to create a shim for monitorEvents so that I could see the event life-cycle of an element and ultimately try to debug it. One thing I found is that monitorEvents requires an element, but in my case I know there will eventually be an element with a known id; I just don’t know when it will be created.
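A shim along those lines can be sketched like this; `listEventTypes`, this `monitorEvents`, and the commented MutationObserver wiring (including the `card-number` id) are my own illustrative names mimicking the DevTools helper, not the actual code from the post.

```javascript
// DOM elements expose handler properties like 'onclick' and 'onfocus';
// stripping the 'on' prefix gives the event types we can listen for.
function listEventTypes(target) {
  const types = [];
  for (const key in target) {
    if (key.startsWith('on')) types.push(key.slice(2));
  }
  return types;
}

// Shim: attach a logging listener for every discoverable event type.
function monitorEvents(element, log = console.log) {
  for (const type of listEventTypes(element)) {
    element.addEventListener(type, e => log(e.type, e));
  }
}

// For an element that doesn't exist yet, a MutationObserver can defer
// the call in the browser until the node appears:
// new MutationObserver((records, obs) => {
//   const el = document.getElementById('card-number');
//   if (el) { monitorEvents(el); obs.disconnect(); }
// }).observe(document.body, { childList: true, subtree: true });

const fakeElement = { onclick: null, onfocus: null, addEventListener() {} };
const types = listEventTypes(fakeElement);
```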
I’ve recently started researching autofill and what hints browsers give to developers that they have automatically filled in a form field on the user’s behalf. Blink and WebKit browsers have a special CSS pseudo-class that you can look at (more in another post), but Firefox doesn’t. There must be some event!!! Chrome DevTools has a handy helper function called monitorEvents: you call it with an element as an argument and it will then log to the console all the events that happen on that element.
Many of you know that I am passionate about inter-app communications, specifically the action of sharing. One of the things that I have encouraged anyone who wants to build the next version of Web Intents to do is focus on a very small and specific use case. Well, good news everybody: Matt Giuca on the Chrome team has been working on a simple API (Web Share) that has the potential to connect websites with native apps, and it is in the Chrome Dev channel on Android to test.
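In use, the API is a single call on `navigator`. The `isShareData` validation helper and `shareArticle` wrapper below are my own illustrative additions; the `navigator.share({ title, text, url })` shape follows the proposal.

```javascript
// The proposal accepts a title, text, and/or url; require at least one
// non-empty string before attempting to share (illustrative check).
function isShareData(data) {
  return ['title', 'text', 'url'].some(
    key => typeof data[key] === 'string' && data[key].length > 0
  );
}

// Illustrative wrapper: feature-detect, validate, then share.
async function shareArticle(data) {
  if (!('navigator' in globalThis) || !('share' in globalThis.navigator) ||
      !isShareData(data)) {
    return false; // fall back to copy-a-link UI or similar
  }
  await navigator.share(data);
  return true;
}

const ok = isShareData({ title: 'Web Share', url: 'https://example.com' });
const bad = isShareData({});
```

Since the API is only in the Dev channel, the fallback path is what most users will hit, so it deserves as much care as the happy path.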
Owen Campbell-Moore, one of Chrome’s PMs for Progressive Web Apps and new APIs, asked the following question, and instantly Surma (that is the only name we know him by) said “Sockets”. @owencm Network connections. Like writing an SSH client as a PWA. — Surma (@DasSurma) August 12, 2016 I also threw in my two pennies’ worth, and Marcos Cáceres asked for use cases. @annevk @Paul_Kinlan @DasSurma @owencm I'd still be interested in a good list of fun things that people want to build but can't.
Do we need a browser in the future?
I wrote about screen recording from Android a little while ago, and whilst it is cool, I didn’t document any of the process I used to get the recording into a device frame and make it look all “profesh”. The process in the past was pretty cumbersome: I would grab the screen recording using my script, then use ScreenFlow to overlay the video on the device frame, export that out to mp4, and follow up with a quick bit of GIF hackery.
Wrinkles, Crinkles and lumpy bits.