I explored using ChatGPT's Code Interpreter to analyze browser compatibility data from the browser-compat-data (BCD) project. My goal was to determine the latest released version of each browser. While the initial results weren't perfect, after a few iterations of feedback the Code Interpreter produced a Python script that accurately extracted the desired information. I was impressed by the speed and efficiency of the process: it accomplished in minutes what would have taken me much longer manually. The generated code also provided a starting point for further analysis, such as visualizing browser release timelines. Despite minor imperfections, the Code Interpreter proved to be a powerful tool for quickly extracting and analyzing data.
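For flavor, here is a minimal sketch of the same extraction in Node rather than the Python the Code Interpreter generated, assuming the @mdn/browser-compat-data npm package and its releases/status layout:

```js
// Sketch only: assumes the package exposes a `browsers` object whose
// releases are keyed by version string, with the shipped release
// carrying status "current".
const bcd = require('@mdn/browser-compat-data');

for (const browser of Object.values(bcd.browsers)) {
  const current = Object.entries(browser.releases)
    .find(([, release]) => release.status === 'current');
  if (current) {
    console.log(`${browser.name}: ${current[0]}`);
  }
}
```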
I created a Lighthouse audit that uses machine learning to detect whether an anchor tag looks like a button. This involved training a TensorFlow.js model, building a custom Lighthouse gatherer to capture high-resolution screenshots, and processing those screenshots to identify anchors styled as buttons. The audit highlights these anchors in the Lighthouse report. The code for the scraper, web app, and Lighthouse audit is available on GitHub. While there are edge cases, this project demonstrates the potential of using ML for visual inspection tasks in web development.
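The classification step, reduced to a hedged sketch — the model file, input size, and threshold below are assumptions for illustration, not the trained model from the repo:

```js
const tf = require('@tensorflow/tfjs-node');

// Assumed: a binary classifier exported as a TensorFlow.js LayersModel.
const modelPromise = tf.loadLayersModel('file://./model/model.json');

// Given a PNG crop of a single anchor, score how button-like it looks.
async function looksLikeButton(pngBuffer) {
  const model = await modelPromise;
  const input = tf.tidy(() =>
    tf.image.resizeBilinear(tf.node.decodeImage(pngBuffer, 3), [224, 224])
      .div(255)
      .expandDims(0));
  const score = (await model.predict(input).data())[0];
  input.dispose();
  return score > 0.5; // assumed decision threshold
}
```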
I needed higher-resolution screenshots for an ML model to classify elements on a webpage, but the default Lighthouse screenshot was too compressed. So I created a custom Lighthouse gatherer using Puppeteer. This gatherer captures a full-page, high-resolution screenshot encoded as base64 and returns it along with the device pixel ratio. This was a fun little project, and the code is surprisingly concise. However, future Lighthouse versions may include higher-resolution screenshots, making this gatherer redundant.
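The heart of it, shown as a standalone Puppeteer script since the Gatherer base-class API differs between Lighthouse versions; inside Lighthouse, this logic would live in the gatherer's hook. The URL and viewport here are placeholders:

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Raise deviceScaleFactor so the capture has enough pixels for the model.
  await page.setViewport({width: 1280, height: 800, deviceScaleFactor: 2});
  await page.goto('https://example.com', {waitUntil: 'networkidle2'});
  const screenshot = await page.screenshot({fullPage: true, encoding: 'base64'});
  const dpr = await page.evaluate(() => window.devicePixelRatio);
  await browser.close();
  console.log({dpr, base64Length: screenshot.length});
})();
```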
I needed to find a way to send webhooks after a successful deployment on Vercel, which wasn't a built-in feature. Since Vercel integrations can listen for deployment events, I created one to solve this. It's a simple tool hosted on GitHub that lets you set up custom webhooks for your Vercel projects. It's not on the Vercel Marketplace, and it's more of a workaround until Vercel natively supports deployment webhooks. Check out the GitHub repo for instructions on setting it up with Firebase Firestore.
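A minimal sketch of the forwarding idea, assuming an Express endpoint registered as the integration's webhook URL and a deployment-succeeded event type (the event name, lookup helper, and port are assumptions; the real project stores per-project target URLs in Firestore):

```js
const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical lookup; the real project reads these from Firestore.
async function lookupWebhooks(projectId) {
  return ['https://example.com/deploy-hook'];
}

app.post('/api/webhook', async (req, res) => {
  const event = req.body;
  if (event.type === 'deployment.succeeded') { // event name is an assumption
    const targets = await lookupWebhooks(event.payload.projectId);
    // Fan the event out to every configured URL (global fetch, Node 18+).
    await Promise.all(targets.map((url) => fetch(url, {
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify(event),
    })));
  }
  res.status(200).end();
});

app.listen(3000);
```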
This blog post explores how machine learning (ML) can enhance the developer experience. Inspired by Corridor Crew's use of ML in VFX, I initially brainstormed ways ML could automate tedious developer tasks, like accessibility improvements and performance optimization. I also considered ML's potential for generating layouts and images. The emergence of tools like GitHub Copilot and DALL·E 2 significantly impacted my thinking, especially regarding the future of software development and my role as a DevRel lead. Ultimately, the transformative power of ChatGPT, demonstrated through its ability to generate webpage layouts and populate them with images based on simple prompts, left me questioning the future of my profession and considering the role I might play in training the next generation of AI tools.
I've created Puppeteer Go, a small JavaScript library to simplify the process of creating CLI utilities with Puppeteer. It handles the boilerplate of launching the browser, opening a tab, navigating to a URL, performing a specified action, and cleaning up. This post demonstrates its usage by taking multiple screenshots of elements on a page, inspired by Ire Aderinokun's work. Examples include capturing screenshots of h1 elements on my blog and feature blocks on caniuse.com.
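The boilerplate it wraps looks roughly like this — the go() signature below is my approximation for illustration, not necessarily puppeteer-go's exact API:

```js
const puppeteer = require('puppeteer');

// Launch, navigate, run the action, and always clean up afterwards.
async function go(url, action) {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url, {waitUntil: 'networkidle2'});
    return await action(page);
  } finally {
    await browser.close(); // close even if the action throws
  }
}

// Usage in the spirit of the post: screenshot every h1 on a page.
go('https://example.com', async (page) => {
  const h1s = await page.$$('h1');
  await Promise.all(h1s.map((el, i) => el.screenshot({path: `h1-${i}.png`})));
});
```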
In a previous post, I discussed screen recording from Android. This post details how I automated the process of adding a device frame to those recordings, making them look more professional. Previously, this was a tedious manual process involving Screenflow, but now I've automated it using ffmpeg. The ffmpeg command scales the screen recording and overlays it onto a background image of a device frame. The code, available on GitHub, handles the entire process, including setting up the Android device for recording, pulling the recording, and applying the frame. While the current solution works well, I'm open to suggestions for improvement from ffmpeg experts.
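The command has roughly this shape; the scale width and overlay offsets below are placeholders that depend on the device-frame artwork, so treat them as assumptions:

```sh
# Loop the frame image as the background, scale the recording to fit the
# screen cut-out, and composite it on top; stop when the recording ends.
ffmpeg -loop 1 -i device-frame.png -i screen-recording.mp4 \
  -filter_complex "[1:v]scale=360:-2[rec];[0:v][rec]overlay=60:170:shortest=1" \
  -pix_fmt yuv420p framed.mp4
```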
This post announces the implementation of automatic deployment for this blog, powered by Jekyll (Octopress) and GitHub WebHooks. The previous workflow involved local editing, committing to GitHub, and deploying via SSH using rake deploy. The new process leverages GitHub WebHooks and a modified version of Github-Auto-Deploy to automatically pull, build, and deploy changes upon pushing to the GitHub repository, simplifying the deployment process and eliminating the need for terminal access and SSH keys.
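In spirit, the hook endpoint boils down to something like this (sketched in Node for illustration; the post's actual implementation is a modified Github-Auto-Deploy, which is Python, and the path is a placeholder):

```js
const http = require('http');
const {execFile} = require('child_process');

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/hook') {
    // GitHub POSTs here on every push: pull, rebuild, and deploy the site
    // using Octopress's rake tasks.
    execFile('sh', ['-c', 'git pull && rake generate && rake deploy'],
      {cwd: '/path/to/blog'},
      (err) => console.log(err ? `deploy failed: ${err}` : 'deployed'));
  }
  res.end();
}).listen(8080);
```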
Installing Chrome for Android directly onto an emulator isn't possible, as it's only available via the Play Store. However, you can install the Chromium Test Shell, an open-source, functional version of Chromium without Chrome's usual interface. Although it lacks features like bookmarking and sync, it supports remote debugging. Recent builds can be found online and installed via adb. I've even created a script to automate downloading, extracting, and installing the latest Chromium Test Shell build, available on GitHub.
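Roughly what such a script does; the snapshot URL layout and archive contents are assumptions that have changed over the years, so treat them as placeholders:

```sh
# Fetch the latest continuous Android build number, download the archive,
# and install the test shell APK onto the connected emulator/device.
REV=$(curl -s https://commondatastorage.googleapis.com/chromium-browser-continuous/Android/LAST_CHANGE)
curl -sO "https://commondatastorage.googleapis.com/chromium-browser-continuous/Android/$REV/chrome-android.zip"
unzip chrome-android.zip
adb install -r chrome-android/apks/ChromiumTestShell.apk
```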
I'm looking for a few web services that don't seem to exist yet. First, a way to save my Twitter favorites to Instapaper (or similar services). Second, a webhook that sends content to Instapaper, as I dislike relying on third-party app integrations. Finally, a service that uses PubSubHubbub to send full RSS feed content directly to my email inbox in near real-time. Existing services only send partial archives. If I can't find these, I might build them as open-source projects.
I'm disappointed with Blogger BackLinks. I thought they'd automatically pull in links to my posts from Google Blog Search, but they don't. Users have to manually add them, which isn't ideal. I plan to create a version that automatically queries Google Blog Search and updates my page with backlinks from there.