I've created a simple UI for my static site and podcast creator that lets me quickly post new content. It uses Firebase Auth, EditorJS, Octokat.js, and Zeit's GitHub integration. This post focuses on committing multiple files to GitHub using Octokat.js. The process involves getting a reference to the repo and the tip of the master branch, creating a blob for each file, creating a new tree that contains those blobs, and creating a commit that points to the new tree. The code handles authentication, creates blobs for the images, the audio (if applicable), and the markdown content, and then creates the tree and the commit. This setup gives me a serverless static CMS.
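Roughly, the flow looks like the sketch below (assuming Octokat's promise-based wrappers; the repo name, file paths, and contents are placeholders, not the exact code from the post):

```ts
// Sketch only: commit several files in one commit via the Git Data API,
// using Octokat.js-style calls. Names and paths are hypothetical.
const Octokat = require('octokat');

async function commitPost(token: string, markdown: string, imageBase64: string) {
  const octo = new Octokat({ token });
  const repo = octo.repos('my-user', 'my-static-site');   // placeholder repo

  // 1. Tip of master and the tree it points at.
  const master = await repo.git.refs('heads/master').fetch();
  const parent = await repo.git.commits(master.object.sha).fetch();

  // 2. One blob per file (base64 for binary assets, utf-8 for markdown).
  const imageBlob = await repo.git.blobs.create({ content: imageBase64, encoding: 'base64' });
  const postBlob = await repo.git.blobs.create({ content: markdown, encoding: 'utf-8' });

  // 3. A new tree containing the blobs, based on the current tree.
  const tree = await repo.git.trees.create({
    base_tree: parent.tree.sha,
    tree: [
      { path: 'images/cover.jpg', mode: '100644', type: 'blob', sha: imageBlob.sha },
      { path: '_posts/new-post.md', mode: '100644', type: 'blob', sha: postBlob.sha }
    ]
  });

  // 4. A commit pointing at the new tree, then fast-forward master to it.
  const commit = await repo.git.commits.create({
    message: 'Add new post', tree: tree.sha, parents: [master.object.sha]
  });
  await repo.git.refs('heads/master').update({ sha: commit.sha });
}
```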
While searching webcomponents.org for a markdown editor to simplify blog posting, I discovered a useful collection of web components by GitHub. I was already familiar with some of their elements, but I was pleasantly surprised to find such a comprehensive and easy-to-use set.
This blog post describes an experiment using Google Cloud Functions to handle web push notifications for services that don't natively support them. I needed a way to process incoming webhooks from various sources like Travis CI and GitHub, transform their payloads into a consistent format for web push, and ensure the system could scale and remain isolated. Google Cloud Functions provided a serverless solution, allowing me to create separate functions for each webhook source. The front-end receives the webhook, pushes the data to a designated Pub/Sub queue, and the corresponding cloud function processes the message and publishes the transformed data to another queue for sending the web push notification. This setup allows for flexibility, scalability, and isolation, fulfilling all my initial requirements.
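As an illustration, a per-source function might look something like the sketch below (the topic names, payload fields, and Pub/Sub-triggered function shape are assumptions, not the code from the post):

```ts
// Sketch of one per-source function (Travis CI here): triggered by a Pub/Sub
// message from the webhook front-end, it normalises the payload and publishes
// it to the outgoing topic that sends the web push. Topic names are hypothetical.
import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub();

export async function onTravisMessage(message: { data: string }): Promise<void> {
  // Pub/Sub delivers the payload base64-encoded.
  const payload = JSON.parse(Buffer.from(message.data, 'base64').toString());

  // Transform the source-specific payload into one consistent push format.
  const push = {
    title: `Build ${payload.state}`,            // field names are assumptions
    body: `${payload.repository} #${payload.number}`,
    icon: '/images/travis.png'
  };

  // Hand the normalised message to the queue that actually sends the web push.
  await pubsub
    .topic('web-push-out')
    .publishMessage({ data: Buffer.from(JSON.stringify(push)) });
}
```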
In a previous post, I discussed screen recording from Android. This post details how I automated the process of adding a device frame to those recordings, making them look more professional. Previously, this was a tedious manual process involving Screenflow, but now I've automated it using ffmpeg. The ffmpeg command scales the screen recording and overlays it onto a background image of a device frame. The code, available on GitHub, handles the entire process, including setting up the Android device for recording, pulling the recording, and applying the frame. While the current solution works well, I'm open to suggestions for improvement from ffmpeg experts.
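The heart of it is a scale-plus-overlay filter graph; here is an illustrative invocation (the frame image, scaled width, and x/y offsets are placeholders, not the exact values from my script):

```ts
// Illustrative only: scale the screen recording and overlay it onto a
// device-frame image with ffmpeg, driven from a small script.
import { execFileSync } from 'child_process';

function frameRecording(recording: string, frame: string, output: string): void {
  execFileSync('ffmpeg', [
    '-loop', '1', '-i', frame,         // input 0: device-frame image, looped
    '-i', recording,                   // input 1: raw screen recording
    '-filter_complex',
    // Shrink the recording to fit the frame's screen area (-2 keeps the height
    // even for x264), then overlay it at the screen's position inside the frame.
    // shortest=1 stops the output when the recording ends.
    '[1:v]scale=700:-2[screen];[0:v][screen]overlay=60:180:shortest=1',
    '-c:v', 'libx264', '-pix_fmt', 'yuv420p',
    output
  ]);
}

frameRecording('recording.mp4', 'nexus-frame.png', 'framed.mp4');
```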
This post announces the implementation of automatic deployment for this blog, powered by Jekyll (Octopress) and GitHub WebHooks. The previous workflow involved local editing, committing to GitHub, and deploying via SSH using rake deploy. The new process leverages GitHub WebHooks and a modified version of Github-Auto-Deploy to automatically pull, build, and deploy changes upon pushing to the GitHub repository, simplifying the deployment process and eliminating the need for terminal access and SSH keys.
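Conceptually, the server-side hook boils down to something like this sketch (not the Github-Auto-Deploy source itself; the port, paths, and build commands are assumptions):

```ts
// Hypothetical sketch of what the hook does when GitHub POSTs to it:
// pull the source, regenerate the site, copy the output into place.
import * as http from 'http';
import { execSync } from 'child_process';

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/deploy') {
    const cwd = '/var/www/blog-source';               // placeholder path
    execSync('git pull', { cwd });
    execSync('bundle exec rake generate', { cwd });   // Octopress build
    execSync('rsync -a public/ /var/www/blog/', { cwd });
  }
  res.end();
}).listen(8001);
```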
Installing Chrome for Android directly onto an emulator isn't possible, as it's only available via the Play Store. However, you can install the Chromium Test Shell, an open-source, functional version of Chromium without Chrome's usual interface. Although it lacks features like bookmarking and sync, it supports remote debugging. Find recent builds online and install them via adb. I've even created a script to automate downloading, extracting, and installing the latest Chromium Test Shell build, available on GitHub.
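The install step itself is just adb; here is a sketch of that final step (the APK path is a placeholder, and the download/extract steps are elided):

```ts
// Sketch of the install step only, assuming the build has already been
// downloaded and unzipped.
import { execFileSync } from 'child_process';

function installTestShell(apkPath = 'chrome-android/apks/ChromiumTestShell.apk'): void {
  // -r reinstalls over any existing build, keeping profile data.
  execFileSync('adb', ['install', '-r', apkPath], { stdio: 'inherit' });
}

installTestShell();
```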
I've created LeviRoutes, a client-side JavaScript routing framework inspired by Rails. It's simple, fast, and focuses solely on handling URL changes. LeviRoutes works with the HTML5 History API and hashchange events, and it gracefully degrades on older browsers. It supports named parameters such as "/:category" for dynamic routing, letting you treat the URL as input to a controller. Check it out on GitHub!
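LeviRoutes' own API may differ in the details, but the named-parameter idea boils down to compiling "/:category"-style patterns into regular expressions, roughly like this sketch:

```ts
// Concept sketch, not LeviRoutes' actual API: turn "/:category" style patterns
// into a regex and hand the captured parameters to a handler.
type Handler = (params: Record<string, string>) => void;

function compile(pattern: string): { regex: RegExp; keys: string[] } {
  const keys: string[] = [];
  const source = pattern.replace(/:([^/]+)/g, (_, key) => {
    keys.push(key);
    return '([^/]+)';
  });
  return { regex: new RegExp(`^${source}$`), keys };
}

function route(pattern: string, path: string, handler: Handler): boolean {
  const { regex, keys } = compile(pattern);
  const match = path.match(regex);
  if (!match) return false;
  const params: Record<string, string> = {};
  keys.forEach((key, i) => (params[key] = match[i + 1]));
  handler(params);
  return true;
}

// e.g. on popstate or hashchange:
route('/:category', '/javascript', ({ category }) => console.log(category)); // "javascript"
```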
This blog post introduces Omni Launch, a Chrome extension I built that lets you quickly launch installed web apps directly from the URL bar. Just type "go", followed by a TAB or SPACE, and then the app name. The extension searches your installed apps and provides suggestions as you type. I also explained the development process, which only took about 20 minutes, including setting up the manifest, hooking up event listeners for omnibox input changes and selections, and using the Management API to fetch and launch apps. The code is available on GitHub.
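The core of it looks roughly like the sketch below (assuming an "omnibox": { "keyword": "go" } entry and the "management" permission in the manifest; the matching logic is illustrative):

```ts
// Suggest installed apps that match what has been typed so far.
chrome.omnibox.onInputChanged.addListener((text, suggest) => {
  chrome.management.getAll((items) => {
    const suggestions = items
      .filter((item) => item.isApp && item.name.toLowerCase().includes(text.toLowerCase()))
      .map((item) => ({ content: item.name, description: `Launch ${item.name}` }));
    suggest(suggestions);
  });
});

// Launch the app whose name was accepted in the omnibox.
chrome.omnibox.onInputEntered.addListener((text) => {
  chrome.management.getAll((items) => {
    const app = items.find((item) => item.isApp && item.name === text);
    if (app) chrome.management.launchApp(app.id);
  });
});
```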
In a previous post, I demonstrated how to create a custom App Launcher using the Management API and the Override Pages framework. That approach, however, didn't let users keep their custom new tab page (NTP) while also using Web Store apps. So I built "Quick Launch," a browser-action extension that addresses the issue. It reuses much of the NTP tutorial's code but declares a 'browser_action' in the manifest, so clicking the extension's icon opens a popup.html that lists installed apps. The popup builds that list dynamically with the Management API and displays each app with its icon. The source code is available on GitHub.
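The popup logic is roughly this sketch (assuming "browser_action": { "default_popup": "popup.html" } in the manifest; the DOM structure and icon handling are assumptions):

```ts
// popup.js sketch: list installed apps with their icons and launch on click.
chrome.management.getAll((items) => {
  const list = document.getElementById('apps')!;
  items.filter((item) => item.isApp).forEach((app) => {
    const icon = document.createElement('img');
    // Use the largest icon the app provides, if any.
    const icons = app.icons ?? [];
    if (icons.length) icon.src = icons[icons.length - 1].url;

    const entry = document.createElement('a');
    entry.textContent = app.name;
    entry.prepend(icon);
    entry.addEventListener('click', () => chrome.management.launchApp(app.id));
    list.appendChild(entry);
  });
});
```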
I've created Appmator, a tool to help developers get their web apps into the Chrome Web Store quickly. Just enter your app's URL, and Appmator generates a zip file ready for upload. Appmator is available in the Chrome Web Store and is built with web fonts, Modernizr, JSZip, and more. Source code is available on GitHub.
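The zip-building idea boils down to something like this sketch (assuming JSZip's async generateAsync API; the manifest fields follow Chrome's hosted-app format, and the icon handling is elided):

```ts
// Sketch: build a hosted-app manifest for the entered URL and zip it up
// for upload to the Chrome Web Store.
import JSZip from 'jszip';

async function buildPackage(appUrl: string, name: string): Promise<Blob> {
  const manifest = {
    name,
    version: '1.0',
    app: {
      urls: [appUrl],
      launch: { web_url: appUrl }
    }
  };

  const zip = new JSZip();
  zip.file('manifest.json', JSON.stringify(manifest, null, 2));
  // A real package also needs icon files alongside the manifest.
  return zip.generateAsync({ type: 'blob' });
}
```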
Back in February, I presented at TwitterDevNest about getting data in and out of Buzz. The slides are now available on SlideShare. I covered WebFinger, OpenID, Buzz feeds, PubSubHubbub, and Salmon. I also promised to open-source the demo code, which I'll be pushing to GitHub later today (covering most of the topics except Salmon).