I've been building a simple screen recorder, and in this post I share how I finally figured out how to record both microphone and desktop audio simultaneously. Previously, I could only record one or the other. The key is to use the Web Audio API, specifically createMediaStreamSource and createMediaStreamDestination, to mix the two audio streams into one. The combined stream can then be fed into the MediaRecorder API. You can check out the full code on my Glitch project and see a demo, too!
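For reference, here's a minimal sketch of that mixing step. It assumes a desktopStream (from getDisplayMedia, with audio) and a micStream (from getUserMedia) already exist; those variable names are my own, not necessarily the ones in the Glitch project:

```js
// Mix desktop audio and microphone audio into a single stream.
const ctx = new AudioContext();
const dest = ctx.createMediaStreamDestination();

// Pipe both audio sources into the same destination node.
ctx.createMediaStreamSource(desktopStream).connect(dest);
ctx.createMediaStreamSource(micStream).connect(dest);

// Pair the mixed audio track with the screen's video track.
const combined = new MediaStream([
  ...desktopStream.getVideoTracks(),
  ...dest.stream.getAudioTracks(),
]);

// The combined stream can now be handed to MediaRecorder.
const recorder = new MediaRecorder(combined, { mimeType: 'video/webm' });
```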
In this post, I'm sharing a screencast demonstrating how I built a web-based webcam and screen recorder using the navigator.getDisplayMedia API, which lets users grant access to their screen content for recording. The code captures both screen and audio, combines them into a single stream, and lets you download the recording as a webm file. This is still at a very early stage, and the current output is raw. The ultimate goal is to build a full-fledged video editor in the browser, but for now, this screencast shows the initial steps in capturing video and audio.
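The overall flow looks roughly like this. It's a sketch under my own naming, written against the standardized navigator.mediaDevices.getDisplayMedia form of the API rather than the earlier navigator.getDisplayMedia draft:

```js
async function recordScreen() {
  // Prompt the user to pick a screen, window, or tab to share.
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });

  // Bundle the screen's video track with the microphone's audio track.
  const stream = new MediaStream([
    ...screen.getVideoTracks(),
    ...mic.getAudioTracks(),
  ]);

  const chunks = [];
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  recorder.ondataavailable = (e) => chunks.push(e.data);

  // When recording stops, offer the result as a downloadable webm file.
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'video/webm' });
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'recording.webm';
    a.click();
  };

  recorder.start();
  return recorder; // call recorder.stop() to end the capture
}
```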
I'm embarking on a project to build a web-based video editor! The goal is to create a tool that simplifies video creation and editing entirely within the browser. Think Screenflow, but accessible to everyone directly on the web. This project is driven by my own needs for creating device demos, screencasts, and other videos. I've already made some progress (check out the demo!), but there's a lot more to do. I'll be exploring existing web technologies to record audio/video, manipulate content (watermarks, filters, overlays), and output in various formats. This isn't about building a massive commercial product, but rather about understanding what's possible and empowering others to create great videos using the open web.
In a previous post, I discussed screen recording from Android. This post details how I automated the process of adding a device frame to those recordings, making them look more professional. Previously, this was a tedious manual process involving Screenflow, but now I've automated it using ffmpeg. The ffmpeg command scales the screen recording and overlays it onto a background image of a device frame. The code, available on GitHub, handles the entire process, including setting up the Android device for recording, pulling the recording, and applying the frame. While the current solution works well, I'm open to suggestions for improvement from ffmpeg experts.
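The core of it is a single ffmpeg filter graph along these lines; the filenames, dimensions, and overlay offsets below are illustrative placeholders rather than the values from the actual script on GitHub:

```sh
# Scale the screen recording, then composite it onto the device-frame image.
# -loop 1 keeps the still frame visible for the whole clip; -shortest ends
# the output when the recording does.
ffmpeg -loop 1 -i device_frame.png -i screenrecord.mp4 \
  -filter_complex "[1:v]scale=1080:1920[rec];[0:v][rec]overlay=x=100:y=320" \
  -c:v libx264 -pix_fmt yuv420p -shortest framed.mp4
```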
For our Google I/O 2013 talk, we needed a way to seamlessly showcase live demos on an Android device. Projector switching was clunky, so we pre-recorded the demos for smoother transitions. This post details our process. We used a Blackmagic Intensity Shuttle to capture high-quality HDMI output from a Galaxy Nexus (which thankfully doesn't enforce HDCP on HDMI). This setup, along with the Orientation Control app to maintain portrait mode, allowed us to create polished, in-line video demos. While this solution isn't cheap, the quality and seamless integration were worth the investment.