When generating apps, the spec is important
Generating web apps with AI agents like Replit is incredibly powerful, enabling rapid prototyping and deployment. My experience building tldr.express, a personalized RSS feed summarizer, highlighted the importance of a detailed specification. While initial prompts yielded impressive results, I iteratively refined the app through configuration and additional prompts to address issues like email integration, AI model selection, output formatting, spam prevention, and bot mitigation. This iterative process reinforced that while AI agents excel at rapid generation, a well-defined specification upfront is crucial for a successful outcome. Read More
I lead the Chrome Developer Relations team at Google.
We want people to have the best experience possible on the web without having to install a native app or produce content in a walled garden.
Our team tries to make it easier for developers to build on the web by supporting every Chrome release, creating great content to support developers on web.dev, contributing to MDN, helping to improve browser compatibility, and building some of the best developer tools, like Lighthouse, Workbox, and Squoosh, to name just a few.
I love to learn about what you are building and how I can help with Chrome or web development in general. If you want to chat with me directly, please feel free to book a consultation.
I'm trialing a newsletter; you can subscribe below (thank you!)
User Agents Hitting My Site
Curious about who's visiting my site, I built a user-agent tracker using Vercel middleware and KV storage. It logs every request and displays a live table of user agents and hit counts, refreshing every minute. Check out the code on GitHub! Read More
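For a rough idea of the tally logic, here's a minimal sketch using an in-memory Map in place of Vercel KV; the function names are illustrative, and the real middleware and storage wiring are in the GitHub repo:

```javascript
// Minimal sketch of the user-agent tally logic.
// An in-memory Map stands in for Vercel KV here; real middleware would
// increment a KV counter keyed by the request's user-agent string.
const hits = new Map();

function recordHit(userAgent) {
  const key = userAgent || 'unknown';
  hits.set(key, (hits.get(key) ?? 0) + 1);
  return hits.get(key);
}

// Sort descending by count to feed the live table.
function topUserAgents(limit = 10) {
  return [...hits.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit);
}
```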
Will we care about frameworks in the future?
Building apps with LLMs and agents like Replit has been incredibly productive. The generated code is often vanilla and repetitive, raising questions about the future of frameworks. While frameworks offer abstractions and accelerate development, LLMs seem to disregard these patterns, focusing on implementation. This shift in software development driven by agents may lead to a world where direct code manipulation is unnecessary. It remains to be seen if frameworks and existing architectural patterns will still be relevant in this LLM-driven future or if new patterns will emerge. Read More
20 years blogging
Wow! Just realized I've been blogging for over 20 years, starting way back in August 2004 on kinlan.co.uk with Blogger. The journey has taken me through Posterous and landed me here on paul.kinlan.me with Hugo (and maybe Jekyll at some point). Sure, there's some cringe-worthy stuff in the archives, but it's my history. And honestly, I wouldn't be where I am today without this little corner of the internet. Huge thanks to Tim Berners-Lee and everyone who's made the web what it is! Read More
Generated Web Apps
This blog post lists various web apps I've generated using Repl.it and WebSim, along with their code. Repl.it apps include tools like an image analyzer, a time zone tracker, and a blood pressure tracker. WebSim creations include a 3D globe and gravity simulators. I discuss my preference for Postgres over SQLite, especially with Repl.it's tendency to overwrite SQLite databases upon deployment. Read More
The disposable web
Reflecting on my journey with computers, from the C64 and Amiga 500 to the present day, I've found a renewed excitement in software development. New tools like repl.it and websim.ai empower rapid creation of full-stack, disposable web apps – software built for personal use and easily discarded. This ease of creation removes the barrier to starting projects, making the web an ideal platform for even single-user applications. It's a shift from handcrafted software to a more ephemeral approach, allowing for quicker prototyping and experimentation. Read More
I spent an evening on a fictitious web
I experimented with WebSim, a simulated web environment, creating sites like a personal blog, a timezone converter, an interactive globe, and a travel site. The experience was reminiscent of the early web's playful exploration and highlighted WebSim's potential for creativity and interactive experiences. Read More
Idly musing about Manifest
In this blog post, I share some findings from my exploration of HTTP Archive data. I discovered that a significant number of websites using manifest.json files are using the default configuration generated by Create React App. I plan to investigate this further and determine how prevalent default manifest files are across the web. Read More
Some clean-up new-year
I've made a couple of small changes to the blog. I removed the personal journal section and added my projects to the RSS feed so you can see what I've been working on with Generative AI. Happy New Year! Read More
Chat GPT Code Interpreter and Browser Compat Data
I explored using ChatGPT's Code Interpreter to analyze browser compatibility data from the BCD project. My goal was to determine the latest released versions of different browsers. While the initial results weren't perfect, through a few iterations of feedback, the Code Interpreter generated a Python script that accurately extracted the desired information. I was impressed by the speed and efficiency of this process, as it accomplished in minutes what would have taken me much longer manually. The generated code also provided a starting point for further analysis, like visualizing browser release timelines. Despite minor imperfections, the Code Interpreter proved to be a powerful tool for quickly extracting and analyzing data. Read More
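The generated script was Python, but the core extraction is simple. Here's a JavaScript sketch over a simplified mirror of BCD's browsers data; the latestStableVersions name and pared-down shape are mine, not taken from the generated script:

```javascript
// Sketch of the extraction step: find each browser's latest released
// version from BCD-style release data, where each browser's `releases`
// object maps version numbers to release records with a `status` field
// ('current' marks the latest stable release).
function latestStableVersions(browsers) {
  const out = {};
  for (const [name, info] of Object.entries(browsers)) {
    for (const [version, release] of Object.entries(info.releases)) {
      if (release.status === 'current') out[name] = version;
    }
  }
  return out;
}
```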
IndexedDB as a Vector Database
I created a simple vector database called "Vector IDB" that runs directly in the browser using IndexedDB. It's designed to store and query JSON documents with vector embeddings, similar to Pinecone, but implemented locally. The API is basic, with insert, update, delete, and query functions. While it lacks optimizations like pre-filtering and advanced indexing found in dedicated vector databases, it provides a starting point for experimenting with vector search in the browser without relying on external services. The project was a fun way to learn about vector databases and their use with embeddings from APIs like OpenAI. Read More
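To illustrate the kind of brute-force scoring such a query performs, here is a sketch of cosine-similarity ranking over a plain array; the array stands in for the IndexedDB store, and the function names are illustrative rather than Vector IDB's actual API:

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every stored document against the query embedding and
// return the top matches — the brute-force approach a small
// in-browser vector store can get away with.
function query(docs, embedding, limit = 3) {
  return docs
    .map((doc) => ({ ...doc, score: cosineSimilarity(doc.embedding, embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```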
Bookmarklet: Eyedropper
This blog post introduces a bookmarklet utilizing the EyeDropper API for quickly grabbing color information in Chromium-based desktop browsers. The bookmarklet simplifies color selection by opening the eyedropper tool and returning the chosen color's sRGBHex value in an alert box. A link to a related blog post about creating a similar Chrome extension is also included. Read More
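The core of such a bookmarklet is tiny. A sketch of the idea, where EyeDropper is the real API and pickColor is just an illustrative wrapper (URL-encode the call behind a javascript: prefix to turn it into an actual bookmarklet):

```javascript
// Open the eyedropper and report the picked pixel's sRGB hex value.
// The EyeDropper API is currently limited to Chromium-based desktop browsers.
async function pickColor() {
  if (!('EyeDropper' in globalThis)) {
    throw new Error('EyeDropper API not supported in this browser');
  }
  // open() resolves once the user clicks a pixel anywhere on screen.
  const { sRGBHex } = await new EyeDropper().open();
  alert(sRGBHex); // e.g. "#aabbcc"
  return sRGBHex;
}
```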
Querying browser compat data with a LLM
I explored using LLMs for checking web API browser compatibility. Existing LLMs struggle with outdated data, so I experimented with MDN's Browser Compat Data (BCD). Initial trials using raw BCD JSON with GPT-4 had limitations. To improve this, I converted the BCD into English descriptions of API support and loaded it into a Polymath instance. This allows natural language queries about API compatibility across browsers, like "Is CSS Grid supported in Safari, Firefox, and Chrome?" or "When was CSS acos available in Chrome?". The results are promising, but further refinement is needed to ensure accuracy and reliability. Read More
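As an illustration of that conversion step, here is a sketch that turns a simplified BCD-style support record into an English sentence; the describeSupport helper and the pared-down record shape are assumptions for the example (real BCD entries can also hold arrays, flags, and partial-support notes), not the actual Polymath pipeline:

```javascript
// Map BCD browser keys to display names.
const browserNames = { chrome: 'Chrome', firefox: 'Firefox', safari: 'Safari' };

// Turn a simplified __compat.support-style record into an English sentence.
// In BCD, version_added is a version string, true (supported, version
// unknown), or false (not supported).
function describeSupport(apiName, support) {
  const parts = Object.entries(support).map(([browser, info]) => {
    const name = browserNames[browser] ?? browser;
    if (!info || info.version_added === false) return `${name} does not support it`;
    if (info.version_added === true) return `${name} supports it (version unknown)`;
    return `${name} has supported it since version ${info.version_added}`;
  });
  return `${apiName}: ${parts.join('; ')}.`;
}
```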
Building Ask Paul
I built Ask Paul, a generative AI demo that answers front-end web dev questions using my content. It leverages Polymath-AI to index content, find related concepts, and generate summaries by creating embedding vectors, using cosine-similarity, and querying OpenAI. The implementation has a UI, a Polymath Client, and a Polymath Host. It's super cool how accessible this tech is now! Read More
Talk: "Aiming for the future" at Bangor University
I presented "Aiming for the Future" at Bangor University, exploring computing's evolution from the Difference Engine to the modern era, focusing on content/data delivery shifts. I proposed that Machine Learning, especially Generative AI, is the next major computing wave, akin to the Web's rise in the early 2000s, potentially mechanizing mental labor. The Student Expo showcased many final-year projects incorporating AI, from creative tools to practical problem-solving, indicating the growing importance of AI in various fields. Read More
BCD - Experimental APIs
I've added a new feature to time-to-stable that lists experimental APIs across browsers using BCD. This helps developers track experimental APIs, understand their stability, and consider the implications for website integration. It helps answer questions about cross-browser compatibility and potential deprecation, informing decisions about using these APIs. Read More
The local-only web
In this post, I explore the potential of the File System Access API to create local-only web experiences. I discuss how this API, combined with tools like Logseq, allows for on-device data storage outside the browser sandbox. While exciting, I also acknowledge the current limitations, such as the need to re-grant file access on refresh, the lack of a visual indicator for local-only sites, and the difficulty of preventing data exfiltration entirely. Despite these challenges, I believe this area holds significant potential and deserves further exploration. Read More
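A sketch of the basic flow, using the real File System Access API entry points (showOpenFilePicker, queryPermission, requestPermission); the helper names are illustrative:

```javascript
// Let the user grant access to a real on-device file, outside the
// origin's sandbox (Chromium-only at the time of writing).
async function openLocalNotes() {
  const [handle] = await showOpenFilePicker({
    types: [{ description: 'Notes', accept: { 'text/markdown': ['.md'] } }],
  });
  const file = await handle.getFile();
  return { handle, text: await file.text() };
}

// A persisted handle (e.g. stored in IndexedDB) survives a refresh,
// but the user must re-grant permission — the limitation noted above.
async function ensureAccess(handle) {
  if ((await handle.queryPermission({ mode: 'readwrite' })) === 'granted') return true;
  return (await handle.requestPermission({ mode: 'readwrite' })) === 'granted';
}
```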
Support during layoffs
I'm offering support to those affected by recent layoffs, including those at Google and across the tech industry. I can help with networking, introductions, LinkedIn recommendations, resume reviews, interview preparation, and just being a listening ear. I've been running support calls for over a year and want to continue helping as much as possible. My calendar is open for bookings if you think I can be of assistance. Read More
Using ML to Create a Simple Lighthouse Audit to Detect a Button
I created a Lighthouse audit that uses machine learning to detect if an anchor tag looks like a button. This involved training a TensorflowJS model, building a custom Lighthouse gatherer to capture high-resolution screenshots, and processing those screenshots to identify anchors styled as buttons. The audit highlights these anchors in the Lighthouse report. The code for the scraper, web app, and Lighthouse audit are available on GitHub. While there are edge cases, this project demonstrates the potential of using ML for visual inspection tasks in web development. Read More
I needed higher resolution screenshots for an ML model to classify elements on a webpage, but the default Lighthouse screenshot was too compressed. So, I created a custom Lighthouse Gatherer using Puppeteer. This gatherer captures a full-page, high-resolution screenshot encoded as base64 and returns it along with the device pixel ratio. This was a fun little project, and the code is surprisingly concise. However, future Lighthouse versions may include higher-resolution screenshots, making this gatherer redundant. Read More