This post wraps up the series of posts I created about applying ML to some developer tasks that are hard to do programmatically. Specifically, I wanted to create a tool that would let me detect whether an anchor (<a>) on a page was styled to look like a button or not (woot, it worked!).
You can check out the previous posts here:
Scraping images of links and buttons to train an ML model
After training a simple machine learning model that can detect whether an image looks like a link or a button, I created a web app to help me test it using Deno, Fresh, and TensorFlow.js. The demo lets you drag and drop many images onto a page and automatically classifies them.
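If you're curious what that flow looks like in code, here's a minimal sketch of classifying dropped images in the browser with TensorFlow.js. The model path, input size, label order, and drop-zone element are assumptions for illustration, not the actual demo's code:

```ts
import * as tf from "@tensorflow/tfjs";

// Assumed class order for the model's output; the real model may differ.
const LABELS = ["button", "link"];

async function classifyImage(model: tf.LayersModel, file: File): Promise<string> {
  // Decode the dropped file into an <img> element that tf.browser.fromPixels can read.
  const img = new Image();
  img.src = URL.createObjectURL(file);
  await img.decode();

  // Convert the pixels to a normalized tensor shaped [1, 150, 150, 3] (input size is assumed).
  const input = tf.tidy(() =>
    tf.browser.fromPixels(img)
      .resizeBilinear([150, 150])
      .toFloat()
      .div(255)
      .expandDims(0)
  );

  const prediction = model.predict(input) as tf.Tensor;
  const scores = Array.from(await prediction.data());
  input.dispose();
  prediction.dispose();
  URL.revokeObjectURL(img.src);

  // Pick the label with the highest score.
  return LABELS[scores.indexOf(Math.max(...scores))];
}

async function setUpDropZone() {
  // Assumed model location; the demo would serve its own converted model files.
  const model = await tf.loadLayersModel("/model/model.json");
  const zone = document.querySelector<HTMLElement>("#drop-zone")!;

  zone.addEventListener("dragover", (e) => e.preventDefault());
  zone.addEventListener("drop", async (e) => {
    e.preventDefault();
    // Classify every image file dropped onto the page.
    for (const file of Array.from(e.dataTransfer?.files ?? [])) {
      if (!file.type.startsWith("image/")) continue;
      const label = await classifyImage(model, file);
      console.log(`${file.name}: looks like a ${label}`);
    }
  });
}

setUpDropZone();
```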
My world has been shook. I started writing this post in March 2021 and am revisiting it today. Back then, I discussed how watching Corridor Crew inspired me to look for ways ML can improve developer experience. After researching, I identified four challenges: inferring what developers meant for the DOM, aiding with accessibility, helping with performance, and creating layouts and images. Finally, I questioned how ChatGPT has changed my job as a DevRel lead.