Hello.

I am Paul Kinlan.

I lead the Chrome and Open Web Developer Relations team at Google.

Generated Web Apps

Reading time: 4 minutes

This blog post lists various web apps I've generated using Repl.it and WebSim. Read More

I lead the Chrome Developer Relations team at Google.

We want people to have the best experience possible on the web without having to install a native app or produce content in a walled garden.

Our team tries to make it easier for developers to build on the web by supporting every Chrome release, creating great content to support developers on web.dev, contributing to MDN, helping to improve browser compatibility, and building some of the best developer tools, like Lighthouse, Workbox, and Squoosh, to name just a few.

I love to learn about what you are building, and how I can help with Chrome or Web development in general, so if you want to chat with me directly, please feel free to book a consultation.

I'm trialing a newsletter; you can subscribe below (thank you!)

The disposable web

Reading time: 5 minutes

Reflecting on my journey with computers, from the C64 and Amiga 500 to the present day, I've found a renewed excitement in software development. New tools like repl.it and websim.ai empower rapid creation of full-stack, disposable web apps – software built for personal use and easily discarded. This ease of creation removes the barrier to starting projects, making the web an ideal platform for even single-user applications. It's a shift from handcrafted software to a more ephemeral approach, allowing for quicker prototyping and experimentation. Read More

I spent an evening on a fictitious web

Reading time: 2 minutes

I experimented with WebSim, a simulated web environment, creating sites like a personal blog, timezone converter, interactive globe, and a travel site. The experience was reminiscent of the early web's playful exploration and highlighted WebSim's potential for creativity and interactive experiences. Read More

Idly musing about Manifest

Reading time: 1 minute

In this blog post, I share some findings from my exploration of HTTP Archive data. I discovered that a significant number of websites using manifest.json files are using the default configuration generated by Create React App. I plan to investigate this further and determine how prevalent default manifest files are across the web. Read More
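For reference, the default manifest.json that Create React App scaffolds looks roughly like this (quoted from memory, so fields may vary slightly between CRA versions):

```json
{
  "short_name": "React App",
  "name": "Create React App Sample",
  "icons": [
    { "src": "favicon.ico", "sizes": "64x64 32x32 24x24 16x16", "type": "image/x-icon" },
    { "src": "logo192.png", "type": "image/png", "sizes": "192x192" },
    { "src": "logo512.png", "type": "image/png", "sizes": "512x512" }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}
```

Sites shipping this file verbatim are easy to spot in HTTP Archive precisely because the name and short_name fields are never customized.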

Some clean-up new-year

Reading time: 1 minute

I've made a couple of small changes to the blog. I removed the personal journal section and added my projects to the RSS feed so you can see what I've been working on with Generative AI. Happy New Year! Read More

ChatGPT Code Interpreter and Browser Compat Data

Reading time: 5 minutes

I explored using ChatGPT's Code Interpreter to analyze browser compatibility data from the BCD project. My goal was to determine the latest released versions of different browsers. While the initial results weren't perfect, through a few iterations of feedback, the Code Interpreter generated a Python script that accurately extracted the desired information. I was impressed by the speed and efficiency of this process, as it accomplished in minutes what would have taken me much longer manually. The generated code also provided a starting point for further analysis, like visualizing browser release timelines. Despite minor imperfections, the Code Interpreter proved to be a powerful tool for quickly extracting and analyzing data. Read More
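The kind of script it converged on can be sketched in a few lines against BCD's browsers data. The sample object below is hand-abridged to show the shape; the real data comes from the @mdn/browser-compat-data package:

```javascript
// Hand-abridged sample shaped like BCD's `browsers` data; the real
// object comes from the @mdn/browser-compat-data package.
const browsers = {
  chrome: {
    releases: {
      "113": { status: "retired", release_date: "2023-05-02" },
      "114": { status: "current", release_date: "2023-05-30" },
      "115": { status: "beta" },
    },
  },
  firefox: {
    releases: {
      "113": { status: "current", release_date: "2023-05-09" },
      "114": { status: "beta" },
    },
  },
};

// For each browser, pick the release BCD marks as "current".
function latestStableVersions(browsers) {
  const result = {};
  for (const [name, browser] of Object.entries(browsers)) {
    const current = Object.entries(browser.releases)
      .find(([, release]) => release.status === "current");
    if (current) result[name] = current[0];
  }
  return result;
}
```

The same traversal extends naturally to pulling release_date values for a release timeline.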

IndexedDB as a Vector Database

Reading time: 3 minutes

I created a simple vector database called "Vector IDB" that runs directly in the browser using IndexedDB. It's designed to store and query JSON documents with vector embeddings, similar to Pinecone, but implemented locally. The API is basic with insert, update, delete, and query functions. While it lacks optimizations like pre-filtering and advanced indexing found in dedicated vector databases, it provides a starting point for experimenting with vector search in the browser without relying on external services. The project was a fun way to learn about vector databases and their use with embeddings from APIs like OpenAI. Read More
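The query side is just brute-force cosine similarity over the stored vectors. A minimal in-memory sketch of that idea (ignoring the IndexedDB persistence layer, with made-up document shapes) looks like:

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Brute-force top-k query: score every document against the query
// embedding, then keep the k highest-scoring ones.
function query(docs, embedding, k = 3) {
  return docs
    .map((doc) => ({ doc, score: cosineSimilarity(doc.embedding, embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

Scanning every record is exactly the kind of work that pre-filtering and indexing avoid in dedicated vector databases, but it is plenty for browser-scale experiments.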

Bookmarklet: Eyedropper

Reading time: 1 minute

This blog post introduces a bookmarklet utilizing the EyeDropper API for quickly grabbing color information in Chromium-based desktop browsers. The bookmarklet simplifies color selection by opening the eyedropper tool and returning the chosen color's sRGBHex value in an alert box. A link to a related blog post about creating a similar Chrome extension is also included. Read More
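The bookmarklet itself boils down to a one-liner around EyeDropper.open(), which resolves with an object carrying the picked color's sRGBHex value. Something along these lines, which only works in Chromium-based desktop browsers:

```javascript
javascript:(async () => {
  // EyeDropper is only available in Chromium-based desktop browsers.
  const result = await new EyeDropper().open();
  alert(result.sRGBHex); // e.g. "#ff6600"
})();
```

Paste it as the URL of a bookmark and click it on any page to launch the eyedropper.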

Querying browser compat data with a LLM

Reading time: 3 minutes

I explored using LLMs for checking web API browser compatibility. Existing LLMs struggle with outdated data, so I experimented with MDN's Browser Compat Data (BCD). Initial trials using raw BCD JSON with GPT-4 had limitations. To improve this, I converted the BCD into English descriptions of API support and loaded it into a Polymath instance. This allows natural language queries about API compatibility across browsers, like "Is CSS Grid supported in Safari, Firefox, and Chrome?" or "When was CSS acos available in Chrome?". The results are promising, but further refinement is needed to ensure accuracy and reliability. Read More
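The conversion step is mechanical: walk each feature's BCD support data and emit one English sentence per browser. A rough sketch of that idea, using a hand-abridged BCD-style entry rather than the real dataset:

```javascript
// Hand-abridged entry shaped like the `support` block of a BCD
// feature's __compat data.
const gridSupport = {
  chrome: { version_added: "57" },
  firefox: { version_added: "52" },
  safari: { version_added: "10.1" },
};

// Turn machine-readable support data into English sentences that an
// LLM (or an embeddings index like Polymath) handles more reliably
// than raw JSON.
function describeSupport(feature, support) {
  return Object.entries(support).map(([browser, { version_added }]) =>
    version_added
      ? `${feature} is supported in ${browser} from version ${version_added}.`
      : `${feature} is not supported in ${browser}.`
  );
}
```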

Building Ask Paul

Reading time: 5 minutes

I built Ask Paul, a generative AI demo that answers front-end web dev questions using my content. It leverages Polymath-AI to index content, find related concepts, and generate summaries by creating embedding vectors, using cosine-similarity, and querying OpenAI. The implementation has a UI, a Polymath Client, and a Polymath Host. It's super cool how accessible this tech is now! Read More

Talk: "Aiming for the future" at Bangor University

Reading time: 3 minutes

I presented "Aiming for the Future" at Bangor University, exploring computing's evolution from the Difference Engine to the modern era, focusing on content/data delivery shifts. I proposed that Machine Learning, especially Generative AI, is the next major computing wave, akin to the Web's rise in the early 2000s, potentially mechanizing mental labor. The Student Expo showcased many final-year projects incorporating AI, from creative tools to practical problem-solving, indicating the growing importance of AI in various fields. Read More

BCD - Experimental APIs

Reading time: 2 minutes

I've added a new feature to time-to-stable that lists experimental APIs across browsers using BCD. This helps developers track experimental APIs, understand their stability, and consider the implications for website integration. It helps answer questions about cross-browser compatibility and potential deprecation, informing decisions about using these APIs. Read More
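Finding those APIs amounts to recursively walking the BCD tree and collecting every feature whose status.experimental flag is set. Roughly, against a small hand-made BCD-shaped object (CoolNewAPI is invented for illustration):

```javascript
// Hand-made object shaped like BCD: nested feature nodes, each with
// an optional __compat entry carrying a status block.
const tree = {
  api: {
    CoolNewAPI: {
      __compat: { status: { experimental: true, deprecated: false } },
    },
    Fetch: {
      __compat: { status: { experimental: false, deprecated: false } },
      json: {
        __compat: { status: { experimental: false, deprecated: false } },
      },
    },
  },
};

// Recursively collect dotted paths of experimental features.
function experimentalFeatures(node, path = []) {
  const found = [];
  for (const [key, value] of Object.entries(node)) {
    if (key === "__compat") {
      if (value.status && value.status.experimental) found.push(path.join("."));
    } else if (value && typeof value === "object") {
      found.push(...experimentalFeatures(value, [...path, key]));
    }
  }
  return found;
}
```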

The local-only web

Reading time: 3 minutes

In this post, I explore the potential of the File System Access API to create local-only web experiences. I discuss how this API, combined with tools like Logseq, allows for on-device data storage outside the browser sandbox. While exciting, I also acknowledge the current limitations, such as the need to re-grant file access on refresh, the lack of a visual indicator for local-only sites, and the difficulty of preventing data exfiltration entirely. Despite these challenges, I believe this area holds significant potential and deserves further exploration. Read More

Support during layoffs

Reading time: 1 minute

I'm offering support to those affected by recent layoffs, including those at Google and across the tech industry. I can help with networking, introductions, LinkedIn recommendations, resume reviews, interview preparation, and just being a listening ear. I've been running support calls for over a year and want to continue helping as much as possible. My calendar is open for bookings if you think I can be of assistance. Read More

Using ML to Create a Simple Lighthouse Audit to Detect a Button

Reading time: 4 minutes

I created a Lighthouse audit that uses machine learning to detect if an anchor tag looks like a button. This involved training a TensorflowJS model, building a custom Lighthouse gatherer to capture high-resolution screenshots, and processing those screenshots to identify anchors styled as buttons. The audit highlights these anchors in the Lighthouse report. The code for the scraper, web app, and Lighthouse audit are available on GitHub. While there are edge cases, this project demonstrates the potential of using ML for visual inspection tasks in web development. Read More

Creating a Lighthouse Gatherer to generate high-res screenshots for your Audit

Reading time: 2 minutes

I needed higher resolution screenshots for an ML model to classify elements on a webpage, but the default Lighthouse screenshot was too compressed. So, I created a custom Lighthouse Gatherer using Puppeteer. This gatherer captures a full-page, high-resolution screenshot encoded as base64 and returns it along with the device pixel ratio. This was a fun little project, and the code is surprisingly concise. However, future Lighthouse versions may include higher-resolution screenshots, making this gatherer redundant. Read More

Creating a web app with Deno, Fresh and TensorflowJS

Reading time: 3 minutes

I built a web app using Deno, Fresh, and TensorFlowJS to classify images as links or buttons. The app uses a pre-trained ML model and allows users to drag and drop multiple images for classification. I encountered challenges with server-side rendering and islands, specifically with integrating a file-drop web component. I also documented the process of integrating the TensorFlowJS model, including model loading and prediction handling. The code is available on GitHub. Read More

Training the Button detector ML model

Reading time: 5 minutes

I trained a machine learning model to differentiate between buttons and links on web pages. Using a dataset of ~3000 button images and ~4000 link images, I trained a convolutional neural network (CNN) with added noise for better generalization. Preprocessing included grayscale conversion, dataset diversification with multilingual sites, and image compression. The model performed well in initial tests, correctly classifying button-like and link-like elements. Next, I'll build a web app for easier testing and a Lighthouse audit for website analysis. Read More

Button and Link Scraping for ML training

Reading time: 5 minutes

In this project, I'm working on an accessibility tool to detect links styled as buttons, a common issue that can confuse users. My approach involves scraping websites to gather images of buttons and links, and then training a machine learning model to distinguish between them. This post focuses on the scraping process using Puppeteer. I encountered challenges like occluded elements and smooth scrolling, which I addressed by checking for occlusion and disabling smooth scrolling. The next step is training the ML image classifier. Read More

Adding ActivityPub to your static site

Reading time: 11 minutes

I added ActivityPub support to my static Hugo blog hosted on Vercel. It now automatically announces new posts to followers on the Fediverse. Key challenges included implementing the ActivityPub protocol for a static site, handling WebFinger discovery, managing Follow/Unfollow requests, and sending signed HTTP requests. I used Vercel Serverless Functions for dynamic request handling and Firebase Firestore for storing follower data. Check out the code and follow me @paul@paul.kinlan.me to see it in action! Read More
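WebFinger discovery is the first hurdle: Fediverse servers resolve @paul@paul.kinlan.me by fetching /.well-known/webfinger?resource=acct:paul@paul.kinlan.me, and for a static site that response can simply be a pre-baked JSON file along these lines (the actor href here is illustrative):

```json
{
  "subject": "acct:paul@paul.kinlan.me",
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://paul.kinlan.me/paul"
    }
  ]
}
```

The href points at the ActivityPub actor document, which is where the dynamic pieces (inbox, followers) take over via the serverless functions.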