Describing sites instead of coding them
It might be possible to build sites entirely from a simple description. Read More
I lead the Chrome Developer Relations team at Google.
We want people to have the best experience possible on the web without having to install a native app or produce content in a walled garden.
Our team tries to make it easier for developers to build on the web by supporting every Chrome release, creating great content for developers on web.dev, contributing to MDN, helping to improve browser compatibility, and building some of the best developer tools, like Lighthouse, Workbox, and Squoosh, to name just a few.
I love to learn about what you are building, and how I can help with Chrome or Web development in general, so if you want to chat with me directly, please feel free to book a consultation.
I'm trialing a newsletter; you can subscribe below (thank you!)
Dion Almaer: English will become the most popular development language in 6 years — 🔗
This great post by Dion, "English will become the most popular development language in 6 years", is worth a mull, imho.
There's obviously a lot of push-back on LLMs, be it over what they were trained on or how much energy they use. However, the technology is here, and Dion poses a great question: will natural language become the way people control their computers?
Two things resonated with me:
The reason that we see so many applications pop up with a chat side bar is a signal that we are building bridges between the computers and the humans in natural language ways.
and
Future: your English is the source, and as your computer systems improve, they can be regenerating new and improved implementations. It behooves you to invest in testing and validation in this world, but this is something that is actually really needed any way… we just sometimes get away without doing it.
I maintain a list of apps that I'm happy for people to use. I've also got a huge list of disposable apps that I've built for myself. I'm pretty certain that "natural language" as the main development language will happen at some point, and as it develops, millions more people will have the ability to control their computers in ways that echo how spreadsheets enabled people to manipulate the data in their businesses.
Dion centers some of his discussion on Chat-first vs Spec-first. I agree that the spec is important; I just don't know how this part will develop. Do we develop the spec completely up-front, or is there a chat-like assistant that accretes the spec as we develop? I'm thinking the latter: I can imagine a world where we have a set of critics that attempt to look objectively at your spec and tell you what's missing or what could be developed further.
Email - The Web's Forgotten Medium
I hope this email finds you well. Read More
Webkit.org: The success of Interop 2024! — 🔗
Link: https://webkit.org/blog/16413/the-success-of-interop-2024
I saw The success of Interop 2024! in Stefan Judis's Web Weekly Newsletter.
Jen Simmons, on the WebKit team at Apple, pulled together this great post about the progress that was made in 2024.
In 2024, there were 17 such focus areas: Accessibility, CSS Nesting, Custom Properties, Declarative Shadow DOM, font-size-adjust, HTTPS URLs for WebSocket, IndexedDB, Layout, Pointer and Mouse Events, popover, Relative Color Syntax, requestVideoFrameCallback, Scrollbar Styling, @starting-style & transition-behavior, Text Directionality, text-wrap: balance and URL.
Interop is hugely important. It's the reason the web has been so successful. It's the reason we can build things that work across all devices and all browsers. It's the reason we can build things that work for everyone, and I'm grateful for the collaboration between all the browser vendors on this.
We just have to be careful about saying "the web is XX% interoperable": the data in the Interop project is a percentage of the shared focus areas, not of the entire platform. The dashboard is pretty clear about this; wider interoperability still has a long way to go.
Either way, we should celebrate this progress. The web is getting better.
Simon Willison: My approach to running a link blog — 🔗
Link: My approach to running a link blog
I really like Simon's approach to running a link blog, and his principles resonate with me.
I always include the names of the people who created the content I am linking to, if I can figure that out. Credit is really important, and it’s also useful for myself because I can later search for someone’s name and find other interesting things they have created that I linked to in the past. If I’ve linked to someone’s work three or more times I also try to notice and upgrade them to a dedicated tag.
Lifting people up is something that I've always valued (and valued when folks did it for my content). I probably lost my way at the start of my DevRel career - parts of the DevRel job ladder include being "industry influential", and I took that to mean being the expert in web development myself. While I think I'm reasonable at it and I've built a great team, I love seeing other people succeed and I love sharing their work.
I try to add something extra. My goal with any link blog post is that if you read both my post and the source material you’ll have an enhanced experience over if you read just the source material itself.
This was actually something I struggled with in the first iteration of my link blog. I'm still not sure I can always provide more value than the original author, but I also have a hunch that linking out to other sites is a dying art.
Simon also had a bit about the technology behind his link blog:
The technology behind my link blog is probably the least interesting thing about it. It’s part of my simonwillisonblog Django application—the main model is called Blogmark and it inherits from a BaseModel defining things like tags and draft modes that are shared across my other types of content (entries and quotations).
This blog is entirely static (Hugo), and I've been butting my head against the wall. Static is neat, but it's not enough. If you want to add ActivityPub, well, you have to bend Hugo a long way. Add a link blog? That's not too hard given its structure, but it also means having to make a full git commit to the repo, and that's something that slowed me down last time.
When generating apps the spec is important
Generating web apps with AI agents like Replit is incredibly powerful, enabling rapid prototyping and deployment. My experience building tldr.express, a personalized RSS feed summarizer, highlighted the importance of a detailed specification. While initial prompts yielded impressive results, I iteratively refined the app through configuration and additional prompts to address issues like email integration, AI model selection, output formatting, spam prevention, and bot mitigation. This iterative process reinforced that while AI agents excel at rapid generation, a well-defined specification upfront is crucial for a successful outcome. Read More
User Agents Hitting My Site
Curious about who's visiting my site, I built a user-agent tracker using Vercel middleware and KV storage. It logs every request and displays a live table of user agents and hit counts, refreshing every minute. Check out the code on GitHub! Read More
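For flavour, here's a minimal sketch of what middleware like that could look like. This isn't the post's actual code; the "ua-counts" key and the @vercel/kv setup are my assumptions:

```js
// middleware.js — a sketch of edge middleware that counts user agents in KV.
// Not the post's actual code; the "ua-counts" key is an assumption.
import { next } from '@vercel/edge';
import { kv } from '@vercel/kv';

export default async function middleware(request) {
  const ua = request.headers.get('user-agent') ?? 'unknown';
  // Bump this user agent's count in a Redis-style hash; fire-and-forget so
  // the lookup never delays the response.
  kv.hincrby('ua-counts', ua, 1).catch(() => {});
  return next(); // continue to the requested page
}
```

A page or API route can then read the whole table back with something like kv.hgetall('ua-counts') and render it.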
Will we care about frameworks in the future?
Building apps with LLMs and agents like Replit has been incredibly productive. The generated code is often vanilla and repetitive, raising questions about the future of frameworks. While frameworks offer abstractions and accelerate development, LLMs seem to disregard these patterns, focusing on implementation. This shift in software development driven by agents may lead to a world where direct code manipulation is unnecessary. It remains to be seen if frameworks and existing architectural patterns will still be relevant in this LLM-driven future or if new patterns will emerge. Read More
20 years blogging
Wow! Just realized I've been blogging for over 20 years, starting way back in August 2004 on kinlan.co.uk with Blogger. The journey has taken me through Posterous and landed me here on paul.kinlan.me with Hugo (and maybe Jekyll at some point). Sure, there's some cringe-worthy stuff in the archives, but it's my history. And honestly, I wouldn't be where I am today without this little corner of the internet. Huge thanks to Tim Berners-Lee and everyone who's made the web what it is! Read More
Generated Web Apps
This blog post lists various web apps I've generated using Repl.it and WebSim. Read More
The disposable web
Reflecting on my journey with computers, from the C64 and Amiga 500 to the present day, I've found a renewed excitement in software development. New tools like repl.it and websim.ai empower rapid creation of full-stack, disposable web apps – software built for personal use and easily discarded. This ease of creation removes the barrier to starting projects, making the web an ideal platform for even single-user applications. It's a shift from handcrafted software to a more ephemeral approach, allowing for quicker prototyping and experimentation. Read More
I spent an evening on a fictitious web
Experimented with WebSim, a simulated web environment, creating sites like a personal blog, timezone converter, interactive globe, and a travel site. The experience was reminiscent of the early web's playful exploration and highlighted WebSim's potential for creativity and interactive experiences. Read More
Idly musing about Manifest
In this blog post, I share some findings from my exploration of HTTP Archive data. I discovered that a significant number of websites using manifest.json files are using the default configuration generated by Create React App. I plan to investigate this further and determine how prevalent default manifest files are across the web. Read More
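To give a flavour of the kind of check involved, here's a sketch that flags a manifest matching Create React App's well-known defaults. The tell-tale values come from CRA's public/manifest.json template; treat the exact comparison as my assumption, not the post's methodology:

```js
// Sketch: does a site's manifest.json look like Create React App's default?
// The field values are from CRA's template; this comparison is an assumption.
const CRA_DEFAULTS = { short_name: 'React App', theme_color: '#000000' };

async function looksLikeCraDefault(manifestUrl) {
  const manifest = await (await fetch(manifestUrl)).json();
  return Object.entries(CRA_DEFAULTS)
    .every(([key, value]) => manifest[key] === value);
}

looksLikeCraDefault('https://example.com/manifest.json').then(console.log);
```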
Some clean-up new-year
I've made a couple of small changes to the blog. I removed the personal journal section and added my projects to the RSS feed so you can see what I've been working on with Generative AI. Happy New Year! Read More
ChatGPT Code Interpreter and Browser Compat Data
I explored using ChatGPT's Code Interpreter to analyze browser compatibility data from the BCD project. My goal was to determine the latest released versions of different browsers. While the initial results weren't perfect, through a few iterations of feedback, the Code Interpreter generated a Python script that accurately extracted the desired information. I was impressed by the speed and efficiency of this process, as it accomplished in minutes what would have taken me much longer manually. The generated code also provided a starting point for further analysis, like visualizing browser release timelines. Despite minor imperfections, the Code Interpreter proved to be a powerful tool for quickly extracting and analyzing data. Read More
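As a rough illustration of the end result (in JavaScript rather than the Python the Code Interpreter produced), finding each browser's current release looks something like this, assuming the @mdn/browser-compat-data package layout:

```js
// Sketch: list each browser's current release from MDN's browser-compat-data
// (BCD). Assumes the package layout; the post's actual script was Python
// generated by Code Interpreter.
import bcd from '@mdn/browser-compat-data';

for (const [id, browser] of Object.entries(bcd.browsers)) {
  // Releases are keyed by version; status is "current", "retired", "beta", etc.
  const current = Object.entries(browser.releases)
    .find(([, release]) => release.status === 'current');
  if (current) {
    const [version, release] = current;
    console.log(`${browser.name} ${version} (released ${release.release_date})`);
  }
}
```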
IndexedDB as a Vector Database
I created a simple vector database called "Vector IDB" that runs directly in the browser using IndexedDB. It's designed to store and query JSON documents with vector embeddings, similar to Pinecone, but implemented locally. The API is basic, with insert, update, delete, and query functions. While it lacks optimizations like pre-filtering and advanced indexing found in dedicated vector databases, it provides a starting point for experimenting with vector search in the browser without relying on external services. The project was a fun way to learn about vector databases and their use with embeddings from APIs like OpenAI. Read More
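The core of a query like that is simple enough to sketch: read every stored record and rank by cosine similarity. Here's a minimal version of the idea (the "docs" store and "embedding" field are hypothetical names, not Vector IDB's actual internals):

```js
// Sketch of the brute-force query at the heart of a browser vector store:
// read every record from IndexedDB and rank by cosine similarity.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

async function query(db, embedding, limit = 5) {
  // Pull every document out of the (hypothetical) "docs" object store.
  const docs = await new Promise((resolve, reject) => {
    const request = db.transaction('docs').objectStore('docs').getAll();
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
  // Score, sort, and return the closest matches.
  return docs
    .map((doc) => ({ ...doc, score: cosineSimilarity(doc.embedding, embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```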
Bookmarklet: Eyedropper
This blog post introduces a bookmarklet utilizing the EyeDropper API for quickly grabbing color information in Chromium-based desktop browsers. The bookmarklet simplifies color selection by opening the eyedropper tool and returning the chosen color's sRGBHex value in an alert box. A link to a related blog post about creating a similar Chrome extension is also included. Read More
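The whole thing fits in a few lines; a version of the idea looks like this (the real bookmarklet is in the post, this is just its shape):

```js
// The gist of the bookmarklet: open the eyedropper, alert the picked colour.
// The EyeDropper API is currently limited to Chromium-based desktop browsers.
javascript:(async () => {
  const result = await new EyeDropper().open(); // resolves once a pixel is picked
  alert(result.sRGBHex); // e.g. "#663399"
})();
```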
Querying browser compat data with an LLM
I explored using LLMs for checking web API browser compatibility. Existing LLMs struggle with outdated data, so I experimented with MDN's Browser Compat Data (BCD). Initial trials using raw BCD JSON with GPT-4 had limitations. To improve this, I converted the BCD into English descriptions of API support and loaded it into a Polymath instance. This allows natural language queries about API compatibility across browsers, like "Is CSS Grid supported in Safari, Firefox, and Chrome?" or "When was CSS acos available in Chrome?". The results are promising, but further refinement is needed to ensure accuracy and reliability. Read More
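The conversion step is the interesting bit: turning a raw BCD entry into a plain-English sentence that an embedding model can index and retrieve. A sketch of that idea (the sentence template is illustrative, not the post's exact format):

```js
// Sketch: turn a BCD feature's support data into an English sentence for
// embedding. The sentence template is illustrative only.
import bcd from '@mdn/browser-compat-data';

function describeSupport(featureName, compat) {
  const parts = Object.entries(compat.support).map(([browser, support]) => {
    // Support may be an array of statements; BCD puts the current one first.
    const entry = Array.isArray(support) ? support[0] : support;
    return entry.version_added
      ? `${browser} since version ${entry.version_added}`
      : `not supported in ${browser}`;
  });
  return `${featureName} is supported in ${parts.join(', ')}.`;
}

console.log(describeSupport('CSS grid', bcd.css.properties.grid.__compat));
```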
Building Ask Paul
I built Ask Paul, a generative AI demo that answers front-end web dev questions using my content. It leverages Polymath-AI to index content, find related concepts, and generate summaries by creating embedding vectors, using cosine-similarity, and querying OpenAI. The implementation has a UI, a Polymath Client, and a Polymath Host. It's super cool how accessible this tech is now! Read More
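Under the hood this is the classic embed-retrieve-generate pattern. Here's a hedged sketch of that shape; it's the generic pipeline, not Polymath's actual API, and the model names are assumptions:

```js
// Sketch of the embed → retrieve → generate pipeline behind a demo like
// Ask Paul. Generic RAG shape, not Polymath's actual API.
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const cosine = (a, b) => {
  let dot = 0, ma = 0, mb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; ma += a[i] * a[i]; mb += b[i] * b[i];
  }
  return dot / (Math.sqrt(ma) * Math.sqrt(mb));
};

async function ask(question, library /* [{ text, embedding }] */) {
  // 1. Embed the question (model name is an assumption).
  const { data } = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: question,
  });
  const queryEmbedding = data[0].embedding;

  // 2. Retrieve the closest chunks of indexed content.
  const context = library
    .map((c) => ({ ...c, score: cosine(c.embedding, queryEmbedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map((c) => c.text)
    .join('\n---\n');

  // 3. Generate an answer grounded in the retrieved context.
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: `Answer using only this context:\n${context}` },
      { role: 'user', content: question },
    ],
  });
  return completion.choices[0].message.content;
}
```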
Talk: "Aiming for the future" at Bangor University
I presented "Aiming for the Future" at Bangor University, exploring computing's evolution from the Difference Engine to the modern era, focusing on content/data delivery shifts. I proposed that Machine Learning, especially Generative AI, is the next major computing wave, akin to the Web's rise in the early 2000s, potentially mechanizing mental labor. The Student Expo showcased many final-year projects incorporating AI, from creative tools to practical problem-solving, indicating the growing importance of AI in various fields. Read More