All of Igalia: Planet Igalia — 🔗
Link: Planet Igalia
When I was preparing the Interop post yesterday, I found one of the Igalia links didn't work, and while I was hunting for the new link, I found Planet Igalia. :mind-blown:
I've always wondered what's happening at Igalia, and while I love the podcast, I do love a good blog... I'm really not sure how I missed this entire part of the site, but it's got so much good information and frequent deep dives into browsers. For example, Maksim Sisov's post on "Unmasking Hidden Floating-Point Errors in Chromium’s Ozone/Wayland" - such an awesome read into an area I had zero knowledge about.
Triff and marv.
I lead the Chrome Developer Relations team at Google.
We want people to have the best experience possible on the web without having to install a native app or produce content in a walled garden.
Our team tries to make it easier for developers to build on the web by supporting every Chrome release, creating great content to support developers on web.dev, contributing to MDN, helping to improve browser compatibility, and building some of the best developer tools, like Lighthouse, Workbox, and Squoosh, to name just a few.
I love to learn about what you are building, and how I can help with Chrome or Web development in general, so if you want to chat with me directly, please feel free to book a consultation.
I'm trialing a newsletter; you can subscribe below (thank you!)
Nicholas C. Zakas: ESLint now officially supports linting of CSS - ESLint - Pluggable JavaScript Linter — 🔗
Link: ESLint now officially supports linting of CSS - ESLint - Pluggable JavaScript Linter
We feel that validation and enforcing baseline features are the minimum that a linter needs to support in 2025, and so we spent a lot of time making the no-invalid-at-rules, no-invalid-properties, and require-baseline rules as comprehensive as possible.
Love it!! Absolutely love it.
One of the areas that we would love to see more of (and we hope to work on) is more integration of Baseline into developer workflows. That could be linters (like ESLint), boot-strappers, IDEs, compilers, frameworks, DevTools. You name it; I think it would help the entire ecosystem if we could increase awareness and usage of Baseline, and therefore get the features that all the browsers implement natively into the hands of developers.
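To make that concrete, here's roughly what enabling those rules looks like in a flat config file. This is a sketch assuming the `@eslint/css` plugin; check the ESLint docs for the exact rule options:

```javascript
// eslint.config.js - sketch assuming the @eslint/css package
import css from "@eslint/css";

export default [
  {
    files: ["**/*.css"],
    plugins: { css },
    language: "css/css",
    rules: {
      // Flag unknown at-rules and properties.
      "css/no-invalid-at-rules": "error",
      "css/no-invalid-properties": "error",
      // Warn on features that aren't Baseline "widely available" yet.
      "css/require-baseline": ["warn", { available: "widely" }],
    },
  },
];
```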
Exciting times.
Lokesh Khurana: How Google Chrome’s autofill feature helps both shoppers and merchants — 🔗
Link: How Google Chrome’s autofill feature helps both shoppers and merchants
One of Shopify’s key metrics is Checkout Conversion Rate (CCR), which measures the rate of successful checkouts completed over a period of time. Through testing, Shopify found that removing unnecessary steps led to more customers completing their checkouts. Guest checkouts using autofill had a 45% higher CCR than guest checkouts without autofill. Basically, the customers who didn’t have to spend time filling out forms were more likely to actually buy something at the end of their online shopping trip.
Emphasis mine, but this is incredible and is something that I think has gone under the radar.
For the longest time it was thought that there was no way to measure the impact of autofill on the web. In 2016 I developed a small snippet to detect when autofill happens, and while no one paid much attention at the time, it's good to see that with the CSS :autofill pseudo-class now in Baseline, we can write some simple JS that detects autofill more reliably.
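As a sketch of what that simple JS could look like - the helper, field names, and logging call are placeholders, not a finished tracker:

```javascript
// Sketch: detecting autofill via the :autofill pseudo-class.

// Pure helper: pick the names of fields a predicate marks as autofilled.
function getAutofilledNames(fields, isAutofilled) {
  return fields.filter(isAutofilled).map((field) => field.name);
}

// Browser-only wiring: :autofill matches while a field holds a
// browser-filled value that the user hasn't edited.
if (typeof document !== "undefined") {
  document.querySelector("form")?.addEventListener("change", (event) => {
    const fields = [...event.currentTarget.elements];
    const filled = getAutofilledNames(fields, (f) => f.matches?.(":autofill"));
    if (filled.length > 0) {
      console.log("autofilled:", filled); // swap in your own analytics call
    }
  });
}
```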
There's also now better autofill tooling in Chrome DevTools, so my hope is that more people will start to understand the impact of autofill on their sites and start to optimize for it....
... maybe. There is still a lot of concern in the ecosystem about autocomplete=off, and a lot of people who don't want the user agent to control the autocomplete experience.
Interop 2025: another year of web platform improvements — 🔗
Link: Interop 2025: another year of web platform improvements
It's exciting to see the web platform continue to evolve and improve. Interop, along with Baseline, aims to solve some of the top challenges that developers have when building for the web.
Ultimately, we only make progress as a platform when the major user agents make shared and consistent progress, and this has been a huge effort from the teams at Google, Microsoft, Mozilla, and Apple, not to mention Igalia and Bocoup, as well as all the developers who have helped to prioritize what is important to them.
Here are the announcements from other browsers:
While I've been following the progress on Interop I hadn't noticed that they are looking at new areas of investment:
In addition, and as in previous years, there's a set of areas for investigation. These are areas where we don't have enough information or tests to include as a focus area, but the group feels some work should be done to get them to a stage where we can include them.
- Accessibility testing
- Gamepad API testing
- Mobile testing
- Privacy testing
- WebVTT
This feels like a good set of areas to focus on. I'm particularly interested in Accessibility testing and Privacy testing. I think these are areas that are often overlooked and yet critical to the success of the web platform.
Finally, I encourage everyone to look at the tooling that is now available because of the Interop group's work, to understand all of the web features and their availability across the platform.
You can follow along on the Interop 2025 dashboard at https://wpt.fyi/interop-2025 and as things become Baseline Newly available they'll show up in the Baseline 2025 list on webstatus.dev too.
WikiTok — 🔗
Link: WikiTok
This is such an amazing site and has become a bit of a daily habit for me. It's a brilliantly simple idea that means I'm browsing more of Wikipedia than I ever have before.
I built a similar demo a while ago using the now-defunct Portals API because I wanted to explore what a web browser could be if it had a UI like TikTok. My hypothesis was that while links are amazing, what is behind them is hidden, and worse (imo) it's behind a banner image that is often not representative of the content.
So the idea was that as long as you had a list of sites that you might visit it might be neat to have a way to flick between them quickly and get an idea if you actually want to dive into the content. The portals API seemed like a good way to do it because it could run JS, but was non-interactive and in theory privacy-preserving.
I adapted it in mid-2024 to use generated screenshots of the pages. I also thought I should give it a snappy name: Flick the Web. The code is here and you can play with the demo that runs over the latest articles on Hacker News.
I'm really glad that WikiTok did this and also it has been open sourced as I can totally imagine that this type of UI takes off on the web.
How I edit my blogs
A little bit about how I edit my blog: It's Hugo, on Vercel with a custom editor I built. Read More
Addy Osmani: Why I use Cline for AI Engineering - by Addy Osmani — 🔗
Link: Why I use Cline for AI Engineering - by Addy Osmani
In this post Addy describes his use of Cline (Jan 30). It was the first time I'd heard of it. I was kinda surprised because I've been on top of tooling for a while now.
For the longest time I'd been using Replit, it had a nice flow to it. I could ask it to help me solve a problem and it would just apply the code to the project. For me, it was a nice way to get things done.
I liked Replit, I built a lot of apps with it, however their hosting is expensive and their new checkpoint based pricing model frustrated me, especially when it made very clear mistakes.
At the time (like two weeks ago) Copilot didn't apply any changes to the code; it just suggested things which I had to then copy and paste... and even then the suggestions just weren't that good.
Cline has replaced both Copilot and Replit.
I don't mind paying for the use of an LLM and as noted by Addy:
I’ve also deeply appreciated Cline’s proactive accounting of cost during a session. This is most notable when switching between model providers:
I love that I can see where the costs are coming from. I can see how much I am spending on the model and how much I am spending on the compute. It's a nice touch.
When you add this with something else Addy points out:
The DeepSeek-R1 + Sonnet hybrid approach: Recent benchmarks and user experiences have shown that combining DeepSeek-R1 for planning with Claude 3.5 Sonnet for implementation can reduce costs by up to 97% while improving overall output quality.
It's a pretty compelling solution.
My own workflow at the moment has been to plan with R1 (via OpenRouter) and then implement with Sonnet. I personally don't look at the costs, but I do inspect the output a lot, and this has felt like a pretty good balance for my personal projects.
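As a rough sketch of that plan-then-implement split, using OpenRouter's OpenAI-compatible chat endpoint - the model IDs, prompts, and `OPENROUTER_API_KEY` variable here are assumptions, not a prescription:

```javascript
// Sketch: plan with DeepSeek-R1, implement with Claude 3.5 Sonnet,
// both via OpenRouter's OpenAI-compatible chat completions endpoint.
async function chat(model, messages) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function planThenImplement(task) {
  // Step 1: the reasoning model produces a plan...
  const plan = await chat("deepseek/deepseek-r1", [
    { role: "user", content: `Write a short implementation plan for: ${task}` },
  ]);
  // Step 2: ...and the coding model implements it.
  return chat("anthropic/claude-3.5-sonnet", [
    { role: "user", content: `Implement this plan:\n${plan}` },
  ]);
}

// Only hit the network when a key is actually configured.
if (process.env.OPENROUTER_API_KEY) {
  planThenImplement("a tiny markdown previewer").then(console.log);
}
```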
Describing sites instead of coding them
It might be possible to build sites entirely from a simple description. Read More
Dion Almaer: English will become the most popular development language in 6 years — 🔗
This great post by Dion "English will become the most popular development language in 6 years" is worth a mull imho.
There's obviously a lot of pushback on LLMs, be it what they were trained on or how much energy they use. However, the technology is here, and Dion poses a great question: will natural language become the way people control their computers?
Two things resonated with me:
The reason that we see so many applications pop up with a chat side bar is a signal that we are building bridges between the computers and the humans in natural language ways.
and
Future: your English is the source, and as your computer systems improve, they can be regenerating new and improved implementations. It behooves you to invest in testing and validation in this world, but this is something that is actually really needed any way… we just sometimes get away without doing it.
I maintain a list of apps that I'm happy for people to use. I've also got a huge list of disposable apps that I've built for myself. I'm pretty certain that "natural language" as the main development language will happen at some point, and as it develops, millions more people will have the ability to control their compute in ways that echo how spreadsheets enabled people to manipulate the data in their businesses.
Dion centers some of his discussion on Chat-first vs Spec-first. I agree that the spec is important, I just don't know how this part will develop. Do we develop the spec completely up-front, or is there a chat-like assistant that accretes the spec as we develop? I'm thinking the latter: I can imagine a world where we have a set of critics that attempt to objectively look at your spec and tell you what's missing or what could be developed further.
Email - The Web's Forgotten Medium
I hope this email finds you well. Read More
Webkit.org: The success of Interop 2024! — 🔗
Link: https://webkit.org/blog/16413/the-success-of-interop-2024
I saw The success of Interop 2024! in Stefan Judis's Web Weekly Newsletter.
Jen Simmons at Apple on WebKit pulled together this great post about the progress that has been made in 2024.
In 2024, there were 17 such focus areas: Accessibility, CSS Nesting, Custom Properties, Declarative Shadow DOM, font-size-adjust, HTTPS URLs for WebSocket, IndexedDB, Layout, Pointer and Mouse Events, popover, Relative Color Syntax, requestVideoFrameCallback, Scrollbar Styling, @starting-style & transition-behavior, Text Directionality, text-wrap: balance and URL.
The value of Interop is so important. It's the reason that the web has been so successful. It's the reason that we can build things that work across all devices and all browsers. It's the reason that we can build things that work for everyone and I'm grateful for the collaboration between all the browser vendors on this.
We just have to be careful about saying "the web is XX% interoperable" when the data in the Interop project is a percentage of the shared focus areas, not of the entire platform. The dashboard is pretty clear about this, but the actual situation of wider interoperability has a long way to go.
Either way, we should celebrate this progress. The web is getting better.
Simon Willison: My approach to running a link blog — 🔗
Link: My approach to running a link blog
I really like Simon's approach to running a link blog, and his principles really resonate with me:
I always include the names of the people who created the content I am linking to, if I can figure that out. Credit is really important, and it’s also useful for myself because I can later search for someone’s name and find other interesting things they have created that I linked to in the past. If I’ve linked to someone’s work three or more times I also try to notice and upgrade them to a dedicated tag.
Lifting people up is something that I've always valued (and valued when folks did it for my content). I probably lost my way at the start of my DevRel career - parts of the DevRel job ladder include being "industry influential", and I took that to mean being the expert in web development. While I think I'm reasonable at that and I've built a great team, I love seeing other people succeed and I love sharing their work.
I try to add something extra. My goal with any link blog post is that if you read both my post and the source material you’ll have an enhanced experience over if you read just the source material itself.
This was actually something I struggled with in my first iteration of my link blog. I'm still not sure I can always provide more value than the original author but also I have a hunch that linking out of sites is a dying art.
Simon also had a bit about the technology behind his link blog:
The technology behind my link blog is probably the least interesting thing about it. It’s part of my simonwillisonblog Django application—the main model is called Blogmark and it inherits from a BaseModel defining things like tags and draft modes that are shared across my other types of content (entries and quotations).
This blog is entirely static (Hugo) and I've been butting my head up against the wall. Static is neat, but it's not enough. If you want to add ActivityPub, well, you have to bend Hugo a long way. Add a link blog? That's not too hard given its structure, but it also means having to make a full git commit to the repo, and this was something that slowed me down last time.
When generating apps the spec is important
Generating web apps with AI agents like Replit is incredibly powerful, enabling rapid prototyping and deployment. My experience building tldr.express, a personalized RSS feed summarizer, highlighted the importance of a detailed specification. While initial prompts yielded impressive results, I iteratively refined the app through configuration and additional prompts to address issues like email integration, AI model selection, output formatting, spam prevention, and bot mitigation. This iterative process reinforced that while AI agents excel at rapid generation, a well-defined specification upfront is crucial for a successful outcome. Read More
User Agents Hitting My Site
Curious about who's visiting my site, I built a user-agent tracker using Vercel middleware and KV storage. It logs every request and displays a live table of user agents and hit counts, refreshing every minute. Check out the code on GitHub! Read More
Will we care about frameworks in the future?
Building apps with LLMs and agents like Replit has been incredibly productive. The generated code is often vanilla and repetitive, raising questions about the future of frameworks. While frameworks offer abstractions and accelerate development, LLMs seem to disregard these patterns, focusing on implementation. This shift in software development driven by agents may lead to a world where direct code manipulation is unnecessary. It remains to be seen if frameworks and existing architectural patterns will still be relevant in this LLM-driven future or if new patterns will emerge. Read More
20 years blogging
Wow! Just realized I've been blogging for over 20 years, starting way back in August 2004 on kinlan.co.uk with Blogger. The journey has taken me through Posterous and landed me here on paul.kinlan.me with Hugo (and maybe Jekyll at some point). Sure, there's some cringe-worthy stuff in the archives, but it's my history. And honestly, I wouldn't be where I am today without this little corner of the internet. Huge thanks to Tim Berners-Lee and everyone who's made the web what it is! Read More
Generated Web Apps
This blog post lists various web apps I've generated using Repl.it and WebSim. Read More
The disposable web
Reflecting on my journey with computers, from the C64 and Amiga 500 to the present day, I've found a renewed excitement in software development. New tools like repl.it and websim.ai empower rapid creation of full-stack, disposable web apps – software built for personal use and easily discarded. This ease of creation removes the barrier to starting projects, making the web an ideal platform for even single-user applications. It's a shift from handcrafted software to a more ephemeral approach, allowing for quicker prototyping and experimentation. Read More
I spent an evening on a fictitious web
Experimented with WebSim, a simulated web environment, creating sites like a personal blog, timezone converter, interactive globe, and a travel site. The experience was reminiscent of the early web's playful exploration and highlighted WebSim's potential for creativity and interactive experiences. Read More
Idly musing about Manifest
In this blog post, I share some findings from my exploration of HTTP Archive data. I discovered that a significant number of websites using manifest.json files are using the default configuration generated by Create React App. I plan to investigate this further and determine how prevalent default manifest files are across the web. Read More