I integrated the Squoosh CLI into my web app to optimize images. Although Squoosh offers a great CLI, I needed its functionality within my app. Leveraging my experience with FFMPEG in web apps, I adapted the Squoosh CLI code, replacing Node.js dependencies with web APIs. Now, I can call Squoosh's 'run' method directly in my app to resize and compress images. This unofficial solution works for now, but a dedicated browser API would be ideal for broader integration in CMS platforms, performance analysis tools, and other web applications.
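A minimal sketch of what that in-app call could look like; the module path, the `run(files, options)` signature, and the option shapes are assumptions modelled on the Squoosh CLI's flags, not the actual adapted code:

```js
// Hypothetical: the adapted module and its run(files, options) entry point are
// assumptions; the real adaptation may expose a different interface.
import { run } from './adapted-squoosh-cli.js';

async function optimiseImage(file) {
  // `file` is a File/Blob, e.g. picked from an <input type="file"> element.
  const [optimised] = await run([file], {
    resize: { width: 1280 },   // mirrors the CLI's --resize option
    mozjpeg: { quality: 75 },  // mirrors the CLI's --mozjpeg option
  });
  return optimised;            // assumed to be the compressed image data
}
```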
I combined FFMPEG.js, an asm.js build of FFMPEG that enables video editing in web apps, with Comlink, a library that simplifies working with web workers. This integration, along with my experiment compiling FFMPEG to WebAssembly, allows for cleaner video encoding off the main thread. The code snippets below show how simple it is to use Comlink to expose the ffmpeg interface inside a web worker and then access it from the main thread as a proxy, a neat solution for asynchronous video processing.
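A condensed sketch of that pattern. It assumes Kagami's ffmpeg.js build, which takes CLI-style arguments and in-memory (MEMFS) files, is available as a global `ffmpeg(opts)` function once loaded in the worker (a small shim may be needed), and it uses Comlink's current `expose`/`wrap` pair (older releases used `proxy`). File names and arguments are illustrative:

```js
// worker.js — classic worker: load the asm.js build and Comlink's UMD bundle.
importScripts('ffmpeg-mp4.js', 'comlink.js');

Comlink.expose({
  // Run ffmpeg with CLI-style arguments against in-memory (MEMFS) files.
  encode({ name, data, args }) {
    const result = ffmpeg({
      MEMFS: [{ name, data }], // place the input in the virtual filesystem
      arguments: args,         // e.g. ['-i', name, 'output.mp4']
    });
    return result.MEMFS[0];    // first output file: { name, data }
  },
});

// main.js — wrap the worker so its methods can be awaited like local async calls.
import * as Comlink from 'comlink';

const ffmpegApi = Comlink.wrap(new Worker('worker.js'));

async function convert(file) {
  return ffmpegApi.encode({
    name: 'input.webm',
    data: new Uint8Array(await file.arrayBuffer()),
    args: ['-i', 'input.webm', 'output.mp4'],
  });
}
```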
I created a quick JavaScript function to convert seconds to an HH:MM:SS timecode format for use with tools like FFMPEG. The function takes a total number of seconds and returns a formatted string.
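A sketch of that conversion; the original function's name and exact formatting may differ:

```js
// Convert a number of seconds into an HH:MM:SS timecode string,
// e.g. suitable for ffmpeg's -ss or -t options.
function toTimecode(totalSeconds) {
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = Math.floor(totalSeconds % 60);
  // Zero-pad each part to two digits: 3725 -> "01:02:05".
  return [hours, minutes, seconds]
    .map(part => String(part).padStart(2, '0'))
    .join(':');
}
```

For example, `toTimecode(3725)` returns `"01:02:05"`.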
I've created a Progressive Web App using FFMPEG.js that applies device frames to Android screen recordings. I've also streamlined the ffmpeg.js build process. This opens up exciting possibilities for building powerful web apps for audio and video manipulation. Many existing web-based video tools are outdated and ad-heavy. I'm interested in creating smaller, focused PWAs, each performing a specific task efficiently. I'm planning to build several apps based on this, such as video trimming, format conversion, watermarking, resizing, and splicing. The codebase from my Device Frames project provides a good starting point. I invite others to collaborate on this effort.
Building ffmpeg.js, a tool I used in my Device Frames project, can be tricky if you need custom filters and encoders. This post details the steps I took to build it successfully on Ubuntu, including installing dependencies, cloning the repo, setting up Emscripten, and running the build. I also included solutions to some common errors encountered along the way.
In a previous post, I discussed screen recording from Android. This post details how I automated the process of adding a device frame to those recordings, making them look more professional. Previously, this was a tedious manual process involving Screenflow, but now I've automated it using ffmpeg. The ffmpeg command scales the screen recording and overlays it onto a background image of a device frame. The code, available on GitHub, handles the entire process, including setting up the Android device for recording, pulling the recording, and applying the frame. While the current solution works well, I'm open to suggestions for improvement from ffmpeg experts.
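For illustration, the core of such a command pairs a `scale` filter with an `overlay`; the file names, dimensions, and offsets below are placeholders, not the values from the actual script:

```sh
# Illustrative only: the frame image is the background (first input) and the
# scaled recording is overlaid onto it at a fixed offset.
ffmpeg -i device-frame.png -i screenrecord.mp4 \
  -filter_complex "[1:v]scale=1080:-2[rec];[0:v][rec]overlay=x=116:y=310" \
  -c:v libx264 -pix_fmt yuv420p framed.mp4
```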