Curious about who's visiting my site, I built a user-agent tracker using Vercel middleware and KV storage. It logs every request and displays a live table of user agents and hit counts, refreshing every minute. Check out the code on GitHub!
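The core of that middleware is tiny. Here's a minimal sketch of the idea — not the post's actual code — assuming the `@vercel/kv` package is configured:

```js
// middleware.js — a minimal sketch, assuming @vercel/kv (Redis-style API).
import { kv } from '@vercel/kv';

export const config = { matcher: '/:path*' }; // run on every request

export default async function middleware(request) {
  const ua = request.headers.get('user-agent') ?? 'unknown';
  await kv.incr(`ua:${ua}`); // one counter per distinct user-agent string
  // Returning nothing lets the request continue to the page as normal.
}
```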
This blog post lists various web apps I've generated using Repl.it and WebSim, along with their code. The Repl.it apps include an image analyzer, a time zone tracker, and a blood pressure tracker; the WebSim creations include a 3D globe and gravity simulators. I also discuss my preference for Postgres over SQLite, especially given Repl.it's tendency to overwrite SQLite databases on deployment.
I'm exploring a new way to build reactive applications using an 'Agents' API. Inspired by Preact Signals and my previous reactive-prompt project, this toolkit uses Chrome's prompt API. Each Agent has a persona, task, and context, reacting to input changes. You can chain Agents, passing data between them. I've created different Agent types like a 'Human' Agent representing user input and a 'ToolCaller' that can execute JavaScript functions based on context. This experiment explores data-flow-driven LLM applications, similar to Breadboard, and leverages Preact Signals for managing this flow.
I created a simple vector database called "Vector IDB" that runs directly in the browser using IndexedDB. It's designed to store and query JSON documents with vector embeddings, similar to Pinecone, but implemented locally. The API is basic with insert, update, delete, and query functions. While it lacks optimizations like pre-filtering and advanced indexing found in dedicated vector databases, it provides a starting point for experimenting with vector search in the browser without relying on external services. The project was a fun way to learn about vector databases and their use with embeddings from APIs like OpenAI.
This blog post introduces a bookmarklet utilizing the EyeDropper API for quickly grabbing color information in Chromium-based desktop browsers. The bookmarklet simplifies color selection by opening the eyedropper tool and returning the chosen color's sRGBHex value in an alert box. A link to a related blog post about creating a similar Chrome extension is also included.
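The EyeDropper API makes this almost a one-liner; the heart of such a bookmarklet looks like this:

```js
javascript:(async () => {
  const result = await new EyeDropper().open(); // opens the picker UI
  alert(result.sRGBHex); // e.g. "#ff0066"
})();
```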
I created a Lighthouse audit that uses machine learning to detect if an anchor tag looks like a button. This involved training a TensorFlow.js model, building a custom Lighthouse gatherer to capture high-resolution screenshots, and processing those screenshots to identify anchors styled as buttons. The audit highlights these anchors in the Lighthouse report. The code for the scraper, web app, and Lighthouse audit is available on GitHub. While there are edge cases, this project demonstrates the potential of using ML for visual inspection tasks in web development.
I built a web app using Deno, Fresh, and TensorFlow.js to classify images as links or buttons. The app uses a pre-trained ML model and allows users to drag and drop multiple images for classification. I encountered challenges with server-side rendering and islands, specifically when integrating a file-drop web component. I also documented the process of integrating the TensorFlow.js model, including model loading and prediction handling. The code is available on GitHub.
I trained a machine learning model to differentiate between buttons and links on web pages. Using a dataset of ~3000 button images and ~4000 link images, I trained a convolutional neural network (CNN) with added noise for better generalization. Preprocessing included grayscale conversion, dataset diversification with multilingual sites, and image compression. The model performed well in initial tests, correctly classifying button-like and link-like elements. Next, I'll build a web app for easier testing and a Lighthouse audit for website analysis.
Custom URL schemes can enhance web app functionality by handling specific URLs, but detecting whether a scheme is supported is tricky. Several methods exist, including click handlers, navigation handlers (Blink), and server-side redirects with meta refresh; the server-side approach is the most robust but introduces the most complexity. A key challenge is that users rarely understand custom schemes and prefer standard HTTPS URLs. This post explores a common pattern for handling schemes like web+follow for Mastodon: attempt the navigation, detect when it fails, and present an alternative UI. Despite the preference for HTTPS URLs, custom schemes let developers guide users to a preferred app or site while gracefully handling the case where none exists, and they open possibilities for other applications, like rebuilding Web Intents.
I'm so excited by the renewed interest in web development sparked by Wordle! It's a simple, fun game that highlights the power of the web. It's accessible, fast, user-friendly, and has inspired countless developers to create their own versions and variations. This post celebrates Wordle's impact, lists various Wordle-inspired projects (including different language versions, framework implementations, and even tools), and encourages readers to share their own discoveries.
With Chrome nearing version 100, there's a concern about whether user agent checks relying on "Chrome 10" will break. Analysis of HTTP Archive data suggests this is unlikely, with most instances of "Chrome 10" in JavaScript code being comments or workarounds rather than version checks. While client-side checks seem safe, server-side checks remain a concern, highlighting the need for User Agent Client Hints. If you know of tools that might be affected by the Chrome 100 user agent change, please get in touch.
As a data-driven manager, I needed a way to track the performance of our team's numerous NPM packages. Frustrated by the lack of an obvious API, I discovered a hidden gem in the NPM registry documentation. Using this, I created a Google Sheet with custom functions to pull download stats directly. The sheet allows you to track both scoped and non-scoped packages, view data in a table or column format, and easily create charts to visualize trends. Check out the linked sheet and accompanying code to build your own NPM downloads dashboard!
I integrated the Squoosh CLI into my web app to optimize images. Although Squoosh offers a great CLI, I needed its functionality within my app. Leveraging my experience with FFmpeg in web apps, I adapted the Squoosh CLI code, replacing Node.js dependencies with web APIs. Now I can call Squoosh's 'run' method directly in my app to resize and compress images. This unofficial solution works for now, but a dedicated browser API would be ideal for broader integration in CMS platforms, performance analysis tools, and other web applications.
This post explores the ever-evolving landscape of web technologies and their compatibility across browsers and platforms. It examines tools like iwanttouse.com and other resources for determining which web features can safely be used in 2021, and includes insights from industry experts like Jason and Mathias Bynens on JavaScript best practices. Finally, it proposes the idea of a "quirks mode" pinned to specific years, which could offer developers greater control over web development.
akachan.app is a Single Page Application (SPA) built for instant loading without using JavaScript on the client-side. Instead, all JavaScript resides in a Service Worker and is also executable on the server. This approach offers significant performance benefits but introduces challenges regarding third-party sign-in integration and data synchronization.
I created a bookmarklet to easily download all images from my daughter's nursery school portal, which doesn't allow direct downloads. It uses the File System API to let the user choose a directory and save all images there. The bookmarklet grabs all images, fetches them sequentially to avoid overloading the server, and saves them to the chosen directory using file handles and writer streams. Now I can easily preserve these memories!
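A sketch of the core loop using the File System Access API (the file-naming logic here is illustrative, not the bookmarklet's exact code):

```js
const dir = await window.showDirectoryPicker();
for (const img of document.querySelectorAll('img')) {
  const response = await fetch(img.src); // sequential, to be kind to the server
  const name = new URL(img.src).pathname.split('/').pop() || 'image.jpg';
  const handle = await dir.getFileHandle(name, { create: true });
  await response.body.pipeTo(await handle.createWritable()); // pipeTo closes the file
}
```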
The MDN Browser Compatibility Report 2020 surveyed web developers to identify pain points in cross-browser compatibility. Layout and styling issues, especially with Flexbox and Grid, topped the list, along with challenges related to viewport units, scrolling on mobile, and achieving consistent form styling. Internet Explorer and Safari were frequently cited as problematic browsers. While JavaScript was initially flagged as a concern, interviews revealed that transpilers like Babel largely mitigate core language issues, shifting the focus to browser APIs and their inconsistencies. The report highlighted ongoing efforts to improve compatibility, including fixes for Flexbox and Grid in Chromium and WebKit, the transition to Chromium-based Edge, and a commitment to enhancing MDN's browser compatibility data.
While building a simple CRUD PWA using only service worker JavaScript and relying on Form submissions for data handling, I encountered an issue with Safari not supporting request.formData(). I created a small shim to work around this by parsing the x-www-form-urlencoded request data as a query string and using URLSearchParams to process data similarly to a FormData object. This approach isn't suitable for multipart forms and requires a different solution.
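The shim boils down to something like this — a sketch of the idea described, for urlencoded bodies only:

```js
async function formDataShim(request) {
  if (request.formData) return request.formData();
  // Safari fallback: parse the x-www-form-urlencoded body by hand.
  const params = new URLSearchParams(await request.text());
  const formData = new FormData();
  for (const [key, value] of params) formData.append(key, value);
  return formData;
}
```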
This blog post introduces a simple bookmarklet that provides quick access to a webpage's JavaScript console logs, warnings, and errors directly on desktop and mobile devices. It eliminates the need for connecting to Chrome DevTools, especially useful for quick debugging on mobile. The bookmarklet creates a small, expandable element at the bottom of the page that displays console outputs and keeps a running tally. It intercepts calls to console.log, console.warn, and console.error, displaying the messages in the created element while preserving their appearance in actual DevTools. While not a full DevTools replacement, it's a handy tool for quick insights and debugging on the go.
I created a simple bookmarklet for quickly enabling Picture-in-Picture mode for videos, even on sites that disable it. Drag the "Quick PIP" bookmarklet link to your bookmarks bar. When clicked, it activates PIP for the first actively playing video on the page or in any same-origin iframe. The bookmarklet's code is concise and avoids polluting the global scope. It efficiently finds the first playing video and requests Picture-in-Picture mode.
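Stripped of the iframe handling, the essence of the bookmarklet is:

```js
// A sketch covering the top document only; the real bookmarklet also
// walks same-origin iframes.
javascript:(() => {
  const video = [...document.querySelectorAll('video')].find(v => !v.paused);
  if (video) video.requestPictureInPicture();
})();
```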
This post provides a quick way to retrieve and filter the list of Blink components from a JSON file hosted by Chromium. The provided JavaScript snippets demonstrate how to fetch and process the component list, filtering for entries that begin with "Bli". The next step is figuring out how to programmatically get a list of OWNERS.
Just saw that Scroll To Text Fragment is launching in Chrome 81! This feature lets you link to specific text within a page, which is awesome. I created a bookmarklet that grabs your selected text and generates a link using the new :~:text= fragment identifier. Drag the "Find in page" link to your bookmarks bar to try it out. The bookmarklet currently selects whole words, but I'm planning on adding some logic to handle partial word selections better. You can also easily modify the bookmarklet to copy the generated link to the clipboard instead of opening a new window.
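A minimal version of the idea (whole-selection only, with none of the partial-word handling mentioned above):

```js
javascript:(() => {
  const text = window.getSelection().toString().trim();
  if (!text) return;
  const url = location.href.split('#')[0] +
    '#:~:text=' + encodeURIComponent(text);
  window.open(url);
})();
```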
I've created a lighter fork of the SimpleImage tool for Editor.js! It addresses a couple of issues I had with the original. First, it now uses blob URLs instead of base64, which saves memory. Second, it adds the ability to select an image file directly, rather than only drag-and-drop. Check out the fork on GitHub!
I received a very cool gift this year: a WebUSB-powered airhorn! It uses an Arduino Uno and some very neat code. The page waits for the user to approve USB access, configures the device, then reads from it continuously, triggering the horn when an "ON" signal arrives. Although the Arduino code isn't available yet, the project is inspired by the WebUSB examples for Arduino and should be released soon. Check out the post and the demo.
I've created Puppeteer Go, a small JavaScript library to simplify the process of creating CLI utilities with Puppeteer. It handles the boilerplate of launching the browser, opening a tab, navigating to a URL, performing a specified action, and cleaning up. This post demonstrates its usage by taking multiple screenshots of elements on a page, inspired by Ire Aderinokun's work. Examples include capturing screenshots of h1 elements on my blog and feature blocks on caniuse.com.
I've created a simple video plugin for EditorJS, called simple-video, to easily embed videos. It's based on the simple-image plugin and allows for autoplay, mute, and control options. Check out the npm package and GitHub repo for more details!
I created a micro-service for generating friendly project names using Zeit's serverless functions and a dictionary of safe words. It's deployed and available at https://friendly-project-name.kinlan.now.sh/. You can use the API endpoint (/api/names) to get random names, specify the number of names with the 'count' parameter, and even customize the separator character. The project is inspired by Glitch's project naming and aims to simplify project creation on Zeit.
This blog post discusses how to integrate Webmentions into a statically generated website built with Hugo, hosted on Zeit. Static sites lack dynamic features like comments, often relying on third-party solutions. This post explores using Webmentions as a decentralized alternative to services like Disqus. It leverages webmention.io as a hub to handle incoming mentions and pingbacks, validating the source and parsing page content. The integration process involves adding link tags to HTML, incorporating the webmention.io API into the build process, and efficiently mapping mention data to individual files for Hugo templates. Finally, a cron job triggers regular site rebuilds via Zeit's deployment API, ensuring timely updates with new mentions.
I explored the concept of "magic iframes" and using adoptNode to move iframes between windows. Initially, I thought I'd found a way to preserve iframe state during the move. However, after discussing with Jake Archibald, it turns out that appendChild already handles node adoption, making adoptNode redundant. Furthermore, moving iframes causes them to reload, negating the perceived benefit. While moving DOM elements between documents is still interesting, the original premise for iframes doesn't hold. The post includes a demo and discusses the potential of the <portal> API.
I've added Webmention support to my blog! I'm excited about Webmentions because they allow decentralized commenting and reactions, unlike Disqus which I'm looking to remove. Sending webmentions involves two parts: the sender and the receiver. I used Remy Sharp's webmention.app tool to simplify the sending process. Integrating it into my Zeit/Hugo blog was super easy - I just installed the package and added a call to the CLI in my build script. Now, whenever I publish a post, it automatically pings any URLs I've linked to.
I've created a simple UI for my static site and podcast creator that allows me to quickly post new content. It uses Firebase Auth, EditorJS, Octokat.js, and Zeit's Github integration. This post focuses on committing multiple files to Github using Octokat.js. The process involves getting a reference to the repo and the tip of the master branch, creating blobs for each file, creating a new tree with these blobs, and creating a commit that points to the new tree. The code handles authentication, creates blobs for images, audio (if applicable), and markdown content, and then creates the tree and commit. This setup allows me to have a serverless static CMS.
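From memory of Octokat.js's promise-style API, the flow looks roughly like this — names and details are a sketch, not the post's exact code, and the image/audio blobs are omitted:

```js
const octo = new Octokat({ token });
const repo = octo.repos('user', 'site'); // hypothetical repo
const master = await repo.git.refs('heads/master').fetch();

// One blob per file; binary content would use encoding: 'base64'.
const blob = await repo.git.blobs.create({ content: markdown, encoding: 'utf-8' });

// New tree on top of the current tip, then a commit pointing at it.
const tree = await repo.git.trees.create({
  base_tree: master.object.sha,
  tree: [{ path: 'content/post.md', mode: '100644', type: 'blob', sha: blob.sha }],
});
const commit = await repo.git.commits.create({
  message: 'New post', tree: tree.sha, parents: [master.object.sha],
});
await repo.git.refs('heads/master').update({ sha: commit.sha });
```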
I've been working on creating a simple screen recording software, and in this post, I share how I finally figured out how to record both microphone and desktop audio simultaneously. Previously, I could only record one or the other. The key is to use the Web Audio API, specifically createMediaStreamSource and createMediaStreamDestination, to combine the two audio streams into one. This combined stream can then be fed into the MediaRecorder API. You can check out the full code on my Glitch project and see a demo, too!
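The mixing step looks like this — a sketch of the technique, using the modern `navigator.mediaDevices.getDisplayMedia` location:

```js
const desktop = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
const mic = await navigator.mediaDevices.getUserMedia({ audio: true });

// Pipe both audio streams into a single destination node.
const ctx = new AudioContext();
const mixed = ctx.createMediaStreamDestination();
ctx.createMediaStreamSource(desktop).connect(mixed);
ctx.createMediaStreamSource(mic).connect(mixed);

// Record the desktop video plus the one mixed audio track.
const recorder = new MediaRecorder(new MediaStream([
  ...desktop.getVideoTracks(),
  ...mixed.stream.getAudioTracks(),
]));
recorder.start();
```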
I built a Progressive Web App (PWA) that extracts text from images shared to it. It uses the Share Target API to receive images, the Shape Detection API's TextDetector to analyze them, and EXIF-Js to handle image rotation issues. While it's a handy tool, it currently suffers from cross-browser and cross-version compatibility problems due to the lumpy nature of the web platform. The code snippets highlight key implementation details, like manifest setup, service worker handling, text extraction, and image rotation.
I've migrated my Hugo blog's editor to Editor.js. It's a block-based editor, unlike classic editors, offering more flexibility and a Medium-like experience. Although I faced some challenges adapting the ES5 code from the NPM distribution (compared to the ES Modules examples), building the UI was relatively straightforward. Check out Editor.js for more details.
I created Quick LogCat, a web tool for debugging Android devices without needing adb installed. It uses WebADB.js and the WebUSB API to connect to your device and display logcat output. The tool is useful for on-the-go debugging. It's powerful but also highlights the potential security implications of granting web access to USB devices. This technology opens up exciting possibilities like firmware updates and app sideloading via web interfaces. I'm curious to see what others build with WebUSB and adb access.
I've just added the pinch-zoom-element web component to my photography blog. It's a tiny (~3kb), dependency-free custom element that allows for easy pinch-zooming on any HTML element. Check out the implementation on my blog (touch-enabled device/trackpad needed for testing) and see how simple it is to integrate! This element was crucial for the Squoosh app and perfectly exemplifies the power of web components for clean, reusable UI. I hope to see wider adoption of elements like these, especially for common use-cases like image zooming on e-commerce sites.
In this post, I'm sharing a screencast demonstrating how I built a web-based webcam and screen recorder using the navigator.getDisplayMedia API. This allows users to grant access to their screen content for recording. The code provided captures both screen and audio, combines them into a single video stream, and allows downloading the recorded video as a webm file. This is a very early stage, and the current output is raw. The ultimate goal is to build a full-fledged video editor in the browser, but for now, this screencast shows the initial steps in capturing video and audio.
While building a web-based video editor, I encountered an issue with handling multiple video tracks in a MediaStream. I wanted to switch between different video sources (desktop and webcam) on a single video element without interrupting the MediaRecorder. Attempting to do this by toggling the 'selected' property on the videoTracks object of the video element failed. The videoTracks array only contains the first video track added to the MediaStream, even though the stream itself contains both tracks. This prevents seamless switching between sources within the video element.
I'm embarking on a project to build a web-based video editor! The goal is to create a tool that simplifies video creation and editing entirely within the browser. Think Screenflow, but accessible to everyone directly on the web. This project is driven by my own needs for creating device demos, screencasts, and other videos. I've already made some progress (check out the demo!), but there's a lot more to do. I'll be exploring existing web technologies to record audio/video, manipulate content (watermarks, filters, overlays), and output in various formats. This isn't about building a massive commercial product, but rather about understanding what's possible and empowering others to create great videos using the open web.
In this post, I share how I used Comlink to simplify the worker logic in my QRSnapper project, which aims to achieve 60fps UI and near-instant QR code detection using getUserMedia. The code now utilizes the Barcode Detection API within a Web Worker for efficient QR code scanning. If the native API is available, the code uses it; otherwise, it falls back to a polyfill. This approach keeps the UI responsive while offloading the processing to a separate thread, significantly improving performance.
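In outline, the worker exposes a detect function and the page calls it through a proxy. A sketch using Comlink's current `expose`/`wrap` names (the polyfill fallback described above is omitted):

```js
// worker.js
importScripts('comlink.js');
Comlink.expose({
  async detect(bitmap) {
    const detector = new BarcodeDetector({ formats: ['qr_code'] });
    return detector.detect(bitmap); // heavy work stays off the main thread
  }
});

// main.js
const scanner = Comlink.wrap(new Worker('worker.js'));
const bitmap = await createImageBitmap(videoElement); // frame from getUserMedia
const codes = await scanner.detect(bitmap);
```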
I combined FFMPEG.js, a tool compiled with asm.js for video editing in web apps, with Comlink, a library that simplifies web worker interactions. This integration, along with my experiment of exporting FFMPEG to Web Assembly, allows for cleaner video encoding off the main thread. The provided code snippets demonstrate the simplicity of using Comlink to expose the ffmpeg interface within a web worker and then access it from the main thread as a proxy, offering a neat solution for asynchronous video processing.
Hugo, by default, doesn't serve .mjs files with the correct MIME type, which is necessary for using ES modules. However, starting with v0.43, you can configure Hugo to serve .mjs files correctly by adding the 'mjs' suffix to the 'text/javascript' media type in your config file. This allows for proper local testing of ES modules, although hosting considerations might differ.
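The relevant config stanza looks something like this (TOML; exact syntax varies slightly across Hugo versions):

```toml
[mediaTypes]
  [mediaTypes."text/javascript"]
    suffixes = ["js", "mjs"]
```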
In this post, I explore importing npm modules into web projects using ES6 modules. I needed a quick way to use the 'get-urls' npm module in my ES6 project without resorting to CommonJS bundling. My solution involves creating a wrapper file to import the module, using Rollup to handle Node globals and builtins, converting to ES modules via the CommonJS plugin, and finally, including the bundled file. This highlights a larger issue: much of the Node ecosystem, though not inherently Node-specific, is tightly coupled with it through CommonJS and APIs like 'Buffer' and the old 'URL.' Transitioning to ubiquitous ES modules will require effort and potentially be painful. Until the ecosystem adapts, we'll rely on conversion tools and bundlers for cross-platform code sharing. While using '.mjs' as a standard extension is promising, the lack of infrastructure recognizing it as 'text/javascript' necessitates further server-side configuration, which adds complexity.
In this post, I share a Rollup configuration I created to easily import npm modules into a front-end project using ES6 modules. I needed a way to use the 'get-urls' npm package in my ES6 project without resorting to CommonJS. My solution involves creating a wrapper file, using Rollup to bundle it with necessary plugins (node-resolve, commonjs, node-builtins, node-globals, closure-compiler-js), and then importing the resulting bundle into my HTML using a <script type="module"> tag. While the resulting bundle size is larger than ideal, this method allows me to use npm modules directly within my ES6 code.
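Put together, the config looks roughly like this (plugin package names from that era; the wrapper filename is illustrative, and the closure-compiler-js minification step is omitted):

```js
// rollup.config.js
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
import builtins from 'rollup-plugin-node-builtins';
import globals from 'rollup-plugin-node-globals';

export default {
  // e.g. a wrapper containing: import getUrls from 'get-urls'; export default getUrls;
  input: 'get-urls-wrapper.js',
  output: { file: 'bundle.js', format: 'es' },
  plugins: [resolve({ browser: true }), commonjs(), globals(), builtins()],
};
```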
I recently had the pleasure of attending and thoroughly enjoying a live stream hosted by This Dot, featuring browser representatives from Brave, Beaker, Edge, Chrome, and Mozilla. They discussed recent updates and the future direction of browsers. Key highlights included Beaker Browser's innovative work on the distributed web, Edge's significant updates like Service Worker support and WebP integration, Mozilla's focus on Web Assembly, and Brave's progress with BAT. My team at Google is focused on Discovery, Speed & Reliability, UI Responsiveness, UX, Security, and Privacy. We're working to improve how developers build sites for headless services, optimizing for speed and reliability using metrics like TTI and FID, improving UI responsiveness with techniques like FLIP and Houdini, prioritizing user experience, and addressing security and privacy concerns in light of Intelligent Tracking Prevention and GDPR. It was also exciting to see a shared interest in bringing back Web Intents.
PWACompat is a JavaScript library that helps web developers make their Progressive Web Apps (PWAs) compatible across different browsers. It takes the existing Web App Manifest and generates the necessary meta and link tags for features like icons, favicons, startup mode, and colors, ensuring a consistent experience across browsers, even those with less complete PWA support, like Safari on iOS. PWACompat simplifies cross-browser compatibility for PWAs, handling things like splash screens and other add-to-homescreen features, making it a valuable tool for PWA developers.
This post discusses the importance of web performance and the role of different stakeholders in prioritizing it. It highlights the trend of increasing JavaScript usage, impacting page load times, especially on less powerful devices or slower networks. The author argues that while Google's intervention could be impactful, the long-term solution lies in businesses recognizing the positive correlation between web performance and conversion rates, making it a business priority rather than an afterthought. Tools and guidance are available to help, but ultimately, a shift in industry mindset is essential for sustained improvement.
Emscripten is a great tool for compiling to WebAssembly, but it can introduce unnecessary overhead. It's important to minimize the runtime size for any language compiled to WebAssembly. This post explores how to use Emscripten with a minimal runtime, avoiding excessive magic and focusing on efficiency.
I've been exploring solutions to connect web apps and overcome the limitations of isolated experiences. Web Intents was a good start, but ultimately fell short. The Share API helps, but we need a more general solution for IPC and service discovery. My latest experiment builds on the Tasklets API and Comlink, allowing seamless communication between windows and web workers. It simplifies the complex postMessage API and makes it easy to expose and consume APIs across different contexts. I've created a service discovery mechanism where a 'middleman' site keeps track of available services. Clients can request services based on criteria, and the middleman facilitates the connection. Once connected, the client and service communicate directly, bypassing the middleman. This approach simplifies the developer experience and makes it much easier to build interconnected web experiences. Check out the demos and let me know your thoughts!
I'm excited to share the latest addition to the Shape Detection API: the Text Detection API! This API allows you to detect text within images in real-time, right in the browser. It's still experimental and currently works on Chrome Canary for Android, but it opens up amazing possibilities. Imagine real-time translation, assistive technologies for parsing image content, or even grabbing URLs from slides at conferences. I've built a demo where the API detects text, draws a box around it, and reads it aloud when clicked. Check out the code and demo to experiment yourself. I can't wait to see what you build with this!
I'm exploring the best way to load web components, focusing on how to include styles and templates without creating uncontrolled blocking requests. I've experimented with using a single JavaScript file that encapsulates everything, including styles and a dynamically created template element. This approach avoids external requests but raises questions about extensibility and best practices. Should we revive HTML imports, embrace ES modules, or find a common model for handling templates and styles? Is inlining templates a reasonable solution? I'm looking for community input on how to balance performance and developer experience when deploying web components.
This blog post explores the potential of WebAssembly (Wasm) for full-stack development, allowing code sharing between client and server. I discuss how Wasm could enable progressive enhancement for web APIs like the Shape Detection API. Using this API as an example, I illustrate how a C-binding library like OpenCV, compiled to Wasm, could be used on both client and server to provide consistent functionality regardless of native browser support. This approach involves creating a wrapper around OpenCV and the target web API to bridge the gap between them. I express my excitement about Wasm's potential to simplify deployment and maintenance by enabling the use of a single binary across different environments.
I created a quick JavaScript function to convert seconds to an HH:MM:SS timecode format for use with tools like FFmpeg. The function takes the total seconds and returns a formatted string.
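Such a function is only a few lines; a version equivalent to what's described:

```js
function toTimecode(totalSeconds) {
  const pad = n => String(Math.floor(n)).padStart(2, '0');
  return [totalSeconds / 3600, (totalSeconds % 3600) / 60, totalSeconds % 60]
    .map(pad)
    .join(':');
}

toTimecode(3725); // "01:02:05"
```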
In this post, I share a simple client-side JavaScript PubSub system I built. Motivated by the Not-Invented-Here syndrome and the desire for independent UI components, I created a lightweight event manager called EventManager. It allows components to communicate without direct dependencies by publishing and subscribing to named events. While similar to tools like Redux, this approach avoids separate state management, leveraging the browser's existing state. The code is available on GitHub.
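The entire pattern fits in a few lines; a sketch of such an EventManager:

```js
const EventManager = {
  topics: {},
  subscribe(topic, handler) {
    (this.topics[topic] ??= []).push(handler);
  },
  publish(topic, data) {
    (this.topics[topic] ?? []).forEach(handler => handler(data));
  }
};

// Components communicate by event name, with no direct reference to each other.
EventManager.subscribe('item:added', item => console.log('added', item));
EventManager.publish('item:added', { id: 42 });
```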
I'm excited to share that barcode detection is now available in Chrome Canary via the Shape Detection API! This feature, along with QR code detection, offers a standardized way to access device hardware and bridge the physical and digital worlds. It's especially useful for mobile, eliminating the need for special apps just to scan barcodes. The API is simple: use BarcodeDetector.detect() with an image input to get a promise resolving to a list of barcodes. I've already integrated this into my QR Code Scanner App, and the ability to use it in worker threads is a huge bonus. It's a very promising addition to the web platform and I'm looking forward to seeing what people build with it!
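Usage really is that small:

```js
const detector = new BarcodeDetector({ formats: ['qr_code'] });
const barcodes = await detector.detect(imageElement); // img, canvas, bitmap...
barcodes.forEach(barcode => console.log(barcode.rawValue));
```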
In this post, I discuss a new routing framework I've added to my service worker. It's based on my older LeviRoutes project and allows me to handle different URL patterns separately. This is much cleaner than a complex onfetch handler, and lets me easily manage things like requests to analytics services and my caching strategy. While sw-toolbox is a great alternative, I enjoyed the flexibility and learning experience of building my own. I encourage you to check out the code and consider routing in your own service workers.
In this follow-up post, I've revised my blog's Service Worker and caching strategy to address previous issues, particularly the Firefox incompatibility due to the use of waitUntil and a misunderstanding of cache.put. The updated strategy now correctly fetches from the network, caches the result, and serves content from the cache, falling back to the network request if not found. The code has also been improved for readability and reliability.
I'm excited about the new experimental Shape Detection API in Chrome Canary! It provides a simple JavaScript API for face and barcode detection, leveraging underlying hardware for performance. This opens up new possibilities for web apps, from faster face detection and profile picture cropping to real-time tagging and optimized facial recognition. While currently only available in Chrome for Android (desktop support coming soon), I've shared a demo on JSBin. I also discuss strategies for progressive enhancement to ensure broader compatibility, including server-side detection, client-side JavaScript libraries, and the potential of Web Assembly. This API has the potential to revolutionize object detection performance on the web, and I'm particularly keen to see its impact on barcode scanning apps like my own QR Snapper.
Measuring the impact of autofill is crucial. In WebKit/Blink browsers, the -webkit-autofill pseudo-class helps track autofill, but it's not supported in Firefox. I've found a workaround in Firefox using the input event, checking for the absence of keyboard interaction. Ideally, a standardized :autofill pseudo-class and a dedicated onautocomplete event would simplify this process, allowing developers to measure and manage autofill effectively.
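In WebKit/Blink the pseudo-class can be queried straight from script; a minimal sketch:

```js
let autofilled = [];
try {
  // Throws in browsers that don't recognise the vendor pseudo-class.
  autofilled = document.querySelectorAll('input:-webkit-autofill');
} catch (e) { /* non-WebKit/Blink browser: fall back to the input-event trick */ }
console.log(`${autofilled.length} field(s) were autofilled`);
```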
In my quest to understand how to detect when a field has been autofilled, I needed a way to monitor the events of an element that doesn't exist yet. I created a helper function, waitForElement, that uses MutationObserver to wait for an element with a specific ID to be added to the DOM. Once the element is added, the promise resolves and returns the element. This, combined with my previously created monitorEvents function, allows me to start logging events on dynamically created elements, getting me closer to solving the autofill detection puzzle.
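A version of that helper, equivalent in spirit to the one described:

```js
function waitForElement(id) {
  return new Promise(resolve => {
    const found = document.getElementById(id);
    if (found) return resolve(found);
    const observer = new MutationObserver(() => {
      const el = document.getElementById(id);
      if (el) {
        observer.disconnect();
        resolve(el);
      }
    });
    observer.observe(document.documentElement, { childList: true, subtree: true });
  });
}

waitForElement('cc-number').then(el => monitorEvents(el)); // id is illustrative
```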
I needed to figure out how to monitor events on an element (like when a field is autofilled) and Chrome DevTools has a monitorEvents function, but Firefox doesn't. Since I couldn't find an equivalent in Firefox DevTools, I created my own JavaScript function that iterates through an element's properties, finds event listeners (e.g., "onclick"), extracts the event name (e.g., "click"), and attaches a console logger to each event. The code snippet and a corresponding gist are provided.
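The function is short; a sketch equivalent to what's described:

```js
function monitorEvents(element) {
  // IDL event-handler attributes ("onclick", "oninput", ...) are enumerable,
  // so walking the properties finds every event the element supports.
  for (const prop in element) {
    if (prop.startsWith('on')) {
      const eventName = prop.slice(2); // "onclick" -> "click"
      element.addEventListener(eventName, event => console.log(eventName, event));
    }
  }
}
```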
I'm excited to share a new, simple API for sharing on the web called navigator.share! It's available in Chrome Dev Channel on Android and allows websites to connect with native apps for sharing. This is a step towards a better inter-app communication system, simplifying sharing and potentially extending to other app interactions. You can try it now on my blog by clicking the share button. I've updated my blog to use it, falling back to my existing solution if the API isn't available. Check out the ChromeStatus page and other linked resources for more information and give us feedback!
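The feature-detect-and-fall-back pattern is simple (the Twitter intent fallback here is illustrative):

```js
shareButton.addEventListener('click', async () => {
  if (navigator.share) {
    await navigator.share({ title: document.title, url: location.href });
  } else {
    // Fallback for browsers without the API.
    window.open('https://twitter.com/intent/tweet?url=' +
      encodeURIComponent(location.href));
  }
});
```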
Tired of typing in usernames and passwords? So are your users. Autofill helps, but the Credential Management API gives developers more control. It lets you securely store and retrieve user credentials, simplifying logins with just a couple of taps. This Chrome-only API allows access to a PasswordCredential object, rather than raw passwords. It works with other improvements like proper autofill fields (email, username, new-password, current-password) and offers a potential future where landing and login pages are obsolete. Imagine a web where users stay logged in seamlessly, only re-authenticating when necessary. This post covers how to implement the API, including a demo and sample code. Plus, explore how it might combine with the Web Payment Request API to streamline e-commerce.
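A sketch of the login flow as the Chrome API originally worked (details hedged; later revisions of the spec changed this shape):

```js
// Ask the browser for a stored credential; resolves to null if none exists.
const cred = await navigator.credentials.get({ password: true });
if (cred) {
  // Early Chrome let you hand the PasswordCredential straight to fetch,
  // which attached the username/password without exposing them to JS.
  await fetch('/login', { method: 'POST', credentials: cred });
}
```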
This blog post discusses the implementation of a Service Worker for my blog, with a focus on the caching strategy. I've chosen a "Stale While Revalidate" approach, which prioritizes speed and resilience. The Service Worker intercepts network requests and serves cached content if available, while simultaneously fetching updated content in the background. This ensures the latest version is available after one refresh. The post also details the requirements considered when choosing this strategy, including development simplicity and compatibility with the existing hosting setup (Hugo and NGINX). The provided JavaScript code snippet demonstrates the Service Worker implementation.
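A condensed sketch of that strategy (the cache name is illustrative):

```js
self.addEventListener('fetch', event => {
  event.respondWith((async () => {
    const cache = await caches.open('blog-v1');
    const cached = await cache.match(event.request);
    const networked = fetch(event.request)
      .then(response => {
        cache.put(event.request, response.clone()); // refresh in the background
        return response;
      })
      .catch(() => cached); // offline: stick with whatever we had
    // Stale-while-revalidate: the cached copy wins if present.
    return cached || networked;
  })());
});
```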
I explored building a progressively enhanced sharing web component using Shadow DOM. My focus was on URL visibility and manipulation within web apps, even when they behave like native applications. The component is designed to be customizable and work across browsers, with or without JavaScript, by leveraging existing elements like anchor tags. It uses a Twitter intent as a fallback sharing mechanism when Web Components aren't supported. I'm excited about the potential of web components, even without widespread custom element support.
This post explores how to use Android Intents to detect if a native app is installed. This technique is useful for web apps that also have a native app version, especially for managing push notifications. It allows developers to seamlessly redirect users to the app if it's installed or fall back to the web experience. The method involves creating a special intent URL that opens the app if present, or redirects to a specific URL with a hash fragment. By monitoring the hash change in the browser, the web app can detect if the app launch failed and proceed with web-based push notification registration. While helpful, this approach highlights the complexity of managing push notifications across web and native apps, reinforcing the argument for web-only solutions.
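The intent URL pattern looks roughly like this (the package name and hash marker are hypothetical):

```js
// Try the native app; fall back to this page with a marker hash.
const fallback = location.href + '#no-app';
location.href =
  'intent://example.com/#Intent;scheme=https;package=com.example.app;' +
  'S.browser_fallback_url=' + encodeURIComponent(fallback) + ';end';

window.addEventListener('hashchange', () => {
  if (location.hash === '#no-app') {
    // App launch failed: continue with web-based push registration instead.
  }
});
```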
This post explores the concept of "reverse polyfilling" – creating polyfills for features removed from web browsers. I argue that as the web platform matures, pruning less-used features is necessary for performance and maintainability. While this might seem disruptive to developers, reverse polyfills, combined with the principles of the Extensible Web, offer a solution. By focusing on core primitives and building higher-level features with JavaScript (potentially leveraging technologies like WebAssembly), we can create a more adaptable and efficient web platform. Reverse polyfills will become essential for both removing legacy features and implementing new ones, contributing to a progressively enhanced web experience.
I've been working on making html5rocks.com more mobile-friendly, focusing on reducing "Time to first read". The main culprit was the Table of Contents (ToC). My initial experiment with an offscreen ToC using CSS had cross-browser inconsistencies. Now, using existing JS, the ToC is fixed to the footer and toggled into view. I'm still exploring pure CSS solutions. Initially, I favored a small scrollable ToC at the bottom, but Paul Lewis suggested a full-screen ToC, which proved better. It minimizes distractions and clutter, provides more screen space for a readable, easily navigable ToC with larger touch targets and subtle hierarchy, even for long lists. The before/after screenshots demonstrate the improvement.
I wanted a traffic light system on iwanttouse.com to visually represent feature support. Initially, I used simple CSS classes like .good (green), .ok (amber), and .bad (red), but this required clunky conditional logic to handle the color transitions based on percentage support. Paul Lewis suggested using HSL, which allows for smooth transitions between red, amber, and green by adjusting the hue value (0-359). Now I can set the color dynamically using element.style.color = "hsl(" + ((percentage / 100) * 90) + ", 50%, 50%)"; which maps the percentage support to a hue between 0 (red) and 90 (green).
This post discusses how to use window.name for cross-domain communication between windows/iframes, especially before the onload event. It explains a simple method using window.open to set the name and retrieve it in the opened window. It also addresses IE compatibility issues by base64 encoding/decoding the data and provides code snippets for both encoding and decoding, handling IE's character restrictions and lack of built-in base64 functions.
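A sketch of the handoff (modern btoa/atob shown for brevity; the post shipped its own base64 routines because old IE lacked them):

```js
// On the opening page: encode the payload into the new window's name.
const payload = btoa(JSON.stringify({ greeting: 'hello' }));
window.open('https://other-origin.example/receiver.html', payload);

// In receiver.html on the other origin — readable even before onload:
const data = JSON.parse(atob(window.name));
```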
WebMessaging (postMessage) seems simple but has quirks. Different browsers handle data differently (structured clones vs. strings). The biggest problem is sending messages to a newly opened window/iframe. You can't just send a message immediately; you have to wait for the window to load and signal back. This adds complexity, requiring the new window to postMessage back to the opener, which then sends the actual data. A workaround involves passing data via window.name, but this has security implications as the data's origin is uncertain and the name persists, potentially exposing data.
I found and fixed a bug in WebKit! My LeviRoutes framework needed to simulate 'onpopstate' events for testing, but WebKit's createEvent(\"PopStateEvent\") was broken. After some digging in the WebKit source code, I found the problem in Document.cpp, added the missing PopStateEvent handling, created a test case, and submitted a patch. It got reviewed and accepted! Now my fix is part of WebKit, used by tons of people, and I can finally get back to LeviRoutes.
At Google I/O 2011, we showcased a mobile web app, and many asked about its development timeline. Work began on March 3rd, with core coding starting on March 25th. While the calendar time was just over a month, the effort was spread across four engineers, each dedicating about 20% of their time to their respective UI. Two frameworks we developed, LeviRoutes and FormFactorJS, made this possible: they let us consolidate common logic in a base controller (controller.js) while developing each form factor's UI in isolation, with the flexibility to inject custom code per form factor.
During my "Mobile Web Development: From Zero to Hero" talk at Google I/O, a question came up about client-side data storage now that WebSQL is deprecated. While IndexedDB is on the horizon, what are developers using today? I shared my preference for Lawnchair, a simple key-value store abstraction that's easy to use and perfect for many situations. While I didn't use it in the IO Reader app due to late-stage project constraints and the sufficiency of localStorage, I generally prefer using such libraries. I'm interested in hearing from others. What data storage wrappers or techniques do you prefer when building web apps?
Badgemator is a web app that simplifies the process of creating badges for your Chrome Web Store listing. It generates a single script tag that you can embed on your website. This tag displays a badge to Chrome users who haven't installed your app, encouraging them to visit your store listing. Badgemator automatically fetches your logo and other details, and you can customize the badge's appearance with CSS. The project is open source, and contributions are welcome!
I've created LeviRoutes, a client-side JavaScript routing framework inspired by Rails. It's simple, fast, and focuses solely on handling URL changes. LeviRoutes works with HTML5 History APIs, hashchange events, and even gracefully degrades for older browsers. It supports named parameters like "/:category" for dynamic routing, allowing you to treat the URL as a controller input. Check it out on GitHub!
Hey everyone, I've been playing with the dev channel of Chrome and discovered something huge: background pages for web apps! This means your web app can now run even when the browser is closed, or even after system start-up. It's crazy powerful. You enable this by adding the "background" permission to your app manifest and then using a simple window.open() call with a special third parameter. The background page's state can be toggled with window.close(). Communication between the background page and your app is done using SharedWorkers. Oh, and Appmator now supports this too!
I'm excited to introduce "commently," a simple Buzz-based commenting system for blogs and websites. It synchronizes with your Buzz feed, allowing you to easily embed comments. Just replace the placeholders in the provided JavaScript snippet with your Buzz username and the URL-encoded title of your blog post, and customize the handler function to display the comments on your site. Check out this post for a quick "Getting Started" guide.
In this post, I address the question of how to detect Chrome Extension updates. While there isn't a single API call for this, we can achieve it using the Management API's onInstalled event, which fires upon both installation and updates. By maintaining a record of installed extensions and their versions, we can compare the version in the onInstalled callback with our existing record, identify updates, and notify the user when an update occurs.
This blog post introduces Omni Launch, a Chrome extension I built that lets you quickly launch installed web apps directly from the URL bar. Just type "go", followed by a TAB or SPACE, and then the app name. The extension searches your installed apps and provides suggestions as you type. I also explained the development process, which only took about 20 minutes, including setting up the manifest, hooking up event listeners for omnibox input changes and selections, and using the Management API to fetch and launch apps. The code is available on GitHub.
This blog post demonstrates how to create a Chrome extension that replaces the new tab page with an app launcher. The extension uses the Chrome Management API to retrieve a list of installed apps, displays their icons and names, and enables launching apps by clicking on their icons.
In appmator, I wanted to avoid traditional web elements like 'Save As' buttons. Instead, I implemented a drag-to-desktop feature using Chrome's drag-and-drop functionality. By setting a 'DownloadURL' with a data URI or regular URL on the 'dragstart' event, users can drag data directly to their desktop. This method bypasses the need for a save button. The code example demonstrates how to use the dataTransfer.setData() method with the DownloadURL type. It leverages the JSZip library to generate ZIP files as data URIs for dragging. This approach is Chrome-specific and has no feature detection available.
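The dragstart handler is the whole trick (the MIME type and filename here are illustrative):

```js
element.addEventListener('dragstart', event => {
  // Format is "<mime type>:<file name>:<url-or-data-URI>".
  const uri = 'data:application/zip;base64,' + zipAsBase64; // e.g. from JSZip
  event.dataTransfer.setData('DownloadURL', 'application/zip:app.zip:' + uri);
});
```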
The Chrome Web Store isn't just about HTML5 and JavaScript; Flash plays a crucial role too! Flash apps and games are readily available in the store, with examples like Vyew and Paltalk showcasing functionalities not yet fully achievable with HTML5, such as webcam access. Getting your Flash content into the store is easy, either by using Appmator or directly packaging your SWF file. The store handles distribution and updates, eliminating bandwidth costs for developers. Focus on creating immersive experiences that utilize the full screen, like Canabalt, for maximum user engagement.
I've created Appmator, a tool to help developers get their web apps into the Chrome Web Store quickly. Just enter your app's URL, and Appmator generates a zip file ready for upload. Appmator is available in the Chrome Web Store and is built using some cool technologies like webfonts, Modernizr, jszip, and more. Source code is available on GitHub.
This post concludes the "Buzz This" Chrome Extension series by demonstrating how to add context menus. Context menus provide a powerful way to interact with users, letting them "Buzz" specific content like images or selected text, rather than the entire page. This is achieved by adding "contextMenus" to the permissions in the manifest file and then using chrome.contextMenus.create() in the background.html file. The create() method takes an object that defines the context menu's title, contexts (e.g., "page", "selection", "image", "link"), and an onclick event handler. The click handler determines the context of the click (selected text, image, link) and constructs the Buzz API URL accordingly. The code for the extension is available on GitHub.
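In outline (Manifest V2-era APIs; the Buzz URL is illustrative):

```js
chrome.contextMenus.create({
  title: 'Buzz this',
  contexts: ['page', 'selection', 'image', 'link'],
  onclick(info, tab) {
    // Pick the most specific thing that was clicked.
    const target = info.selectionText || info.srcUrl || info.linkUrl || tab.url;
    chrome.tabs.create({
      url: 'http://www.google.com/buzz/post?url=' + encodeURIComponent(target)
    });
  }
});
```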
I'm super excited about the new classList API! It's like having jQuery's class manipulation, but built right into the browser. This means we can easily add, remove, toggle, and check for classes without messy string parsing. Currently supported in Firefox 3.6+ and Chrome 7+, the classList API uses the DOMTokenList interface and is way more convenient. I'll have a better demo up on the blog soon!
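The convenience is obvious next to the old className string juggling:

```js
const el = document.querySelector('#panel');
el.classList.add('active');
el.classList.toggle('open'); // on if off, off if on
if (el.classList.contains('hidden')) el.classList.remove('hidden');
```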
In a previous post, I discussed the lack of a direct method in JavaScript for deleting arbitrary elements from an array. I had attempted a solution, but misread the documentation for Array.prototype.splice. While I believe my solution is still useful for removing elements without needing to find their indices first, splice does allow removing arbitrary elements by index. To remove one element at a specific position, use values.splice(index, 1). This modifies the original array and returns an array of the removed elements. Thanks to @dezfowler for pointing this out!
This tutorial provides a step-by-step guide to building a basic Chrome extension for posting to Google Buzz. We start by setting up the manifest file with the extension's name, version, and browser action details like the icon and tooltip. Then, we introduce a background page to handle the extension's logic, adding an event listener to detect clicks on the browser action button. Initially, we demonstrate how to display the current URL, and then extend the functionality to open a new tab directed to Google Buzz, pre-filled with the current URL for posting. The tutorial concludes by adding the 'tabs' permission to the manifest for enabling tab creation. Future enhancements will include fetching Buzz stats for the current URL, demonstrating cross-domain requests and browser_action interaction.
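The background page's listener is only a few lines (Manifest V2-era APIs; the Buzz URL is illustrative):

```js
chrome.browserAction.onClicked.addListener(tab => {
  // Requires the "tabs" permission declared in the manifest.
  chrome.tabs.create({
    url: 'http://www.google.com/buzz/post?url=' + encodeURIComponent(tab.url)
  });
});
```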
I've created a simple Chrome extension that lets you post the current page to Google Buzz and see its popularity. In upcoming posts, I'll use this example to demonstrate how easy it is to build Chrome Extensions and add cool features, like using Browser Actions, the Tabs API, Cross Domain Requests, and the Context Menu API. Check out the extension and its code on the Chrome Web Store and Github.
This post explores the challenge of removing specific elements from JavaScript arrays. It critiques the inefficient string manipulation method and introduces the filter() method (available in ECMAScript 5 compliant browsers) as a more elegant solution for removing elements by value. The post acknowledges the lack of a simple way to remove elements by index and hints at further discussion on this topic in a future post.
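For removal by value, filter() reads cleanly:

```js
const values = ['a', 'b', 'c', 'b'];
const withoutB = values.filter(value => value !== 'b'); // ['a', 'c']
```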
This post details how to use the HTML5 canvas element to dynamically create visually appealing custom markers for Google Maps. Instead of using a server to generate marker icons, we leverage the canvas API to draw rounded rectangles with gradients, center text within them and ultimately convert them into data URIs. Using the HSL color model allows for the creation of a harmonious range of colors by adjusting the hue while maintaining consistent saturation and luminance. This client-side approach offers flexibility and control over marker appearance, specifically highlighting techniques for rounded corners, gradients, and text centering. The code examples provided demonstrate the process of generating these markers and integrating them into a map.
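A condensed sketch of the technique (rounded corners omitted for brevity):

```js
function makeMarker(text, hue) {
  const canvas = document.createElement('canvas');
  canvas.width = 100;
  canvas.height = 34;
  const ctx = canvas.getContext('2d');

  // Fixed saturation/luminance; vary only the hue for a harmonious set.
  const gradient = ctx.createLinearGradient(0, 0, 0, canvas.height);
  gradient.addColorStop(0, `hsl(${hue}, 50%, 60%)`);
  gradient.addColorStop(1, `hsl(${hue}, 50%, 40%)`);
  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  // Centre the label.
  ctx.fillStyle = '#fff';
  ctx.font = '13px sans-serif';
  ctx.textAlign = 'center';
  ctx.textBaseline = 'middle';
  ctx.fillText(text, canvas.width / 2, canvas.height / 2);

  return canvas.toDataURL(); // usable as the marker's icon URL
}
```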
I recently discovered a cool trick in WebKit that lets you use a canvas element as a background image, which opens up a ton of creative possibilities. It's a powerful feature, allowing for dynamic, programmatic manipulation of background images. I've included a simple demo showing how to draw a square on a div's background, but imagine the possibilities for games or complex animations! While currently WebKit-specific, I hope other browsers will adopt it soon. More demos to follow!
Tired of recursive DOM traversal headaches? Check out the DOM TreeWalker API! This powerful tool lets you efficiently navigate the DOM, filtering nodes as you go. It's perfect for tasks like finding specific text nodes or elements, highlighting content, or even building Chrome extensions. I've included a simple example of how to use TreeWalker to find and linkify Twitter usernames on a page. Give it a try and see how much easier DOM manipulation can be!
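The skeleton of that approach:

```js
// Visit only text nodes that mention a Twitter-style @username.
const walker = document.createTreeWalker(
  document.body,
  NodeFilter.SHOW_TEXT,
  {
    acceptNode: node => /@\w+/.test(node.data)
      ? NodeFilter.FILTER_ACCEPT
      : NodeFilter.FILTER_REJECT
  }
);

while (walker.nextNode()) {
  console.log(walker.currentNode.data); // linkify in place here
}
```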
This blog post shares a method for integrating Google Calendar into a website using PHP and JavaScript, based on an article from ajax.phpmagazine.net. The author also expresses interest in syncing their Blogger blog with their calendar.
A big thanks to the first person who Dugg my Ajax Tagger on Digg (I think it was Zoodle)! I'd love to hear your feedback on it, good or bad. Let me know what you think! :)
This post explains AJAX (Asynchronous JavaScript and XML) and its use in .NET. AJAX allows web pages to update small sections without reloading the entire page, improving user experience. Traditional ASP.NET (1.x) struggles with this as it's designed to reload entire pages. However, .NET's flexible request pipeline allows plugins/HTTPHandlers to manage AJAX requests, enabling developers to execute specific methods within a page. The post lists several .NET AJAX frameworks, including AjaxPro, Arshad.NET, and AjaxAspects, and points readers to ajaxpatterns.org for more options.
I've updated the AJAX Tagger and moved it to a new subdomain: ajaxtag.kinlan.co.uk. This should make it easier to access. The old location (www.kinlan.co.uk/AjaxExperiments/AjaxTag2) will still work and be updated alongside the new address.
The OPML output functionality in my AJAXTagger is now fixed! There was a bug caused by Internet Explorer's lack of support for the __proto__ construct, affecting how the script determined an object's type. This fix resolves the issue, ensuring compatibility with IE6 and IE7.
I've had people come to my blog searching for how to do threading in JavaScript. Unfortunately, I haven't found a way to do true threading in JavaScript. The closest solution I've found involves creating queues that hold work items. Every 250ms (or a developer-defined interval), the queue checks if work needs to be done and starts a task if none is already running. This approach mimics threading. Check out my AJAX Tagger 2.0 for a working example. If you have any insights on true threading in JavaScript, I'd love to hear them!
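The pattern in miniature:

```js
const queue = [];
let busy = false;

// Every 250ms, start the next work item if nothing is running.
setInterval(() => {
  if (busy || queue.length === 0) return;
  busy = true;
  const task = queue.shift();
  task(); // assumes synchronous work items
  busy = false;
}, 250);

queue.push(() => console.log('work item 1'));
queue.push(() => console.log('work item 2'));
```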
I've updated my AJAX Tagger to Version 2! This release adds a simple but useful feature: you can now manually add your own tags in the tag list panel. This is really helpful when the Yahoo Developer Term Extraction API isn't sufficient for tagging, like when you need specific tags such as "C#" and ".Net".
This post explores how to create JavaScript expando objects within C#. I discuss how to achieve this effect using both client-side JavaScript manipulation from C# and by adding attributes to HTML elements server-side, similar to how tooltips extend WinForms classes. I also touch upon the potential for C# 3.0 to offer this functionality natively and the possible use of Reflection and ExtenderProviders for dynamic property addition.
I'm changing the focus of AJAXTag. Instead of just giving users related information, I want to let readers explore and discover connections themselves. I'll create an interactive version of my blog, allowing users to generate an OPML file of related data. This is inspired by Memorandum, but focuses on user exploration within areas of interest. What are your thoughts?
I've created a tool called DeliTag that automatically suggests tags for any page on kinlan.co.uk and submits them to your Delicious account. It's a quick process: hit "Goto", let the page load, click "Analyze" to see tag suggestions, choose the ones you like, enter your Delicious credentials, and press "Submit". Keep in mind, this currently only works on my site and requires IE6+ with Cross Domain Data Island support. Passwords are sent as plain text, mimicking Delicious's own method. Let me know if you'd like to see this developed further!
I've updated my AJAX application, DeliTag (The Delicious Tag Poster)! Now, when you select text within the IFRAME, the application will analyze only the selected text instead of the entire page. This makes tagging much more precise.
I'm struggling to understand the practical uses of OPML, especially given the inconsistent use of attributes like 'type', 'url', and 'xmlurl'. While I'm developing a JavaScript OPML object model for my own projects (like a tagging system where OPML stores related links for blog posts), I haven't found a clear standard for defining outlines. It seems like the 'standard' emerges from popular usage rather than formal specification. I'm particularly interested in how to determine the file type of items within an OPML outline, as my current application only uses links for pages and images (feed support is still pending). The lack of clear semantics in OPML makes it difficult to build dynamic applications that can 'mash up' content from different sources based on the OPML structure.
I've updated the OPML JavaScript Object Model to support OPML Attributes for Outlines, increasing flexibility for developers. I've also incorporated an instanceOf method (source unknown - please let me know if you recognize it!) to add type checking when inserting OPMLOutlineAttributes into the attribute array. The added instanceOf function is as follows:
```js
function instanceOf(object, constructor) {
  while (object != null) {
    if (object == constructor.prototype) return true;
    object = object.__proto__;
  }
  return false;
}
```
I've encountered a bug in IE6/7 where dynamically created checkboxes lose their checked status after being added to the document. Setting the checked property after appending the element seems to be a reliable workaround. If anyone knows why this behavior occurs, please contact me!
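The workaround in code form:

```js
const checkbox = document.createElement('input');
checkbox.type = 'checkbox';
document.body.appendChild(checkbox);
checkbox.checked = true; // set *after* insertion so IE6/7 keeps it
```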
I came across Matt Harrison's post discussing the challenges of choosing between various Ajax toolkits and frameworks, and it really resonated with me. He highlighted the OSA Foundation's survey of Ajax/JavaScript libraries, which covers a wide range of options like Dojo, DWR, JSON-RPC-JAVA, MochiKit, Prototype, Rico, SAJAX, Scriptaculous, Xajax, and Sack. It's fascinating to see how these libraries address different aspects of Ajax development. This makes me rethink my recent work on the backend XMLHttpRequest for Ajax Tagger Version 2, and whether leveraging existing solutions might have been more efficient. Links to the OSA Foundation, Michael Mahemoff's framework information, and my own previous blog post on Ajax layers are included for further exploration.
I'm working on a JavaScript Object Model for OPML and have found areas for improvement. I initially misunderstood the OPML spec, particularly regarding the attributes of the outline element, which are more flexible than I realized. This is important for handling things like files, links, HTML, and RSS. The current model has issues with proper quoting of characters like quotes and ampersands, but otherwise, the generated OPML XML seems good. I'll be updating the model to handle these attributes soon and will post more about the specific attributes in a future post.
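The quoting fix boils down to escaping XML's special characters before writing attribute values; a minimal sketch (not the model's actual code):

    // Escape the characters that break XML attribute values.
    function escapeXml(s) {
        return String(s)
            .replace(/&/g, "&amp;")
            .replace(/</g, "&lt;")
            .replace(/>/g, "&gt;")
            .replace(/"/g, "&quot;")
            .replace(/'/g, "&apos;");
    }

    // e.g. when serializing an outline element:
    var xml = '<outline text="' + escapeXml('Tom & Jerry "shorts"') + '"/>';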
I've updated Ajax Tagger Version 1 to clean up how Wikipedia article titles appear in search results. It now removes the "- Wikipedia, The Free Encyclopedia" suffix. This improves the result list's readability and still adheres to Wikipedia's linking policy by referencing Wikipedia elsewhere in the implementation.
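The cleanup itself is a small string operation, roughly (a sketch):

    // Strip the standard Wikipedia suffix from a result title.
    function cleanTitle(title) {
        return title.replace(/\s*-\s*Wikipedia, The Free Encyclopedia\s*$/i, "");
    }

    alert(cleanTitle("AJAX - Wikipedia, The Free Encyclopedia")); // "AJAX"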
I've created a basic JavaScript Object Model for OPML, which you can find here: http://www.kinlan.co.uk/AjaxExperiments/opml.js. It's not entirely finished yet, but the core structure is in place. I plan to write proper documentation soon.
I'm developing a JavaScript Object Model for OPML, a first as far as I know! This is essential for my AJAX Tagger version 2, enabling dynamic OPML creation, flexible saving options, and real-time user interaction updates.
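I haven't pinned the final API down yet, but usage should end up looking something like this (illustrative names only, not the published interface):

    // Hypothetical usage: build a document, add an outline, serialize it.
    var doc = new OPMLDocument("Related links for this post");
    var outline = new OPMLOutline("AJAX Tagger");
    outline.addAttribute(new OPMLOutlineAttribute("type", "link"));
    outline.addAttribute(new OPMLOutlineAttribute("url", "http://www.kinlan.co.uk/"));
    doc.addOutline(outline);

    var xml = doc.toXml(); // ready to save or hand to the UI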
I'm developing a complex new version of the AJAX Tagger (2.0) with enhanced features for adding data to journal entries. However, I'm curious if there's still interest in a simpler version like the original AJAX Tagger. Please share your thoughts and comments!
Quick update on the AJAX Tagger 2 development. Priority queues are working well, but the number of queues and their polling intervals might need some tweaking. Querying Technorati for tag counts is proving slow (around 2 seconds per query). Any tips on speeding this up, perhaps by limiting the number of blogs returned? I'm currently improving the related-documents results, which now include counts for all selected tags, not just the last search; this area still needs refining. More updates to come!
I'm working on AJAX Tagger 2 and have implemented a priority queue system for AJAX requests. This system uses 5 queues and prioritizes urgent requests by placing them in the fastest cycling queue. Less urgent requests go into slower queues. Check out the demo to see how tag requests are prioritized and tag stats are fetched on a slower queue. The whole page is asynchronous!
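The shape of the five-queue setup, as a sketch (the intervals are illustrative):

    // Five queues, from urgent (polled every 100ms) to background (every 2s).
    var queues = [[], [], [], [], []];
    var intervals = [100, 250, 500, 1000, 2000];

    function startQueue(i) {
        setInterval(function () {
            if (queues[i].length > 0) {
                var task = queues[i].shift();
                task(); // fire the next AJAX request from this queue
            }
        }, intervals[i]);
    }
    for (var i = 0; i < queues.length; i++) startQueue(i);

    // Priority 0 is the fastest cycling queue.
    queues[0].push(function () { /* urgent tag request */ });
    queues[4].push(function () { /* slow tag-stats request */ });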
Just a quick update on what I've been up to. I'm still working on AJAXTagger v2 whenever I can. It's coming along, but I ran into a few JavaScript issues. Big shoutout to the Dream Projections blog for a post that really helped me figure out how to call JavaScript Object methods with setInterval – super useful for the priority queueing system I'm building.
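The trick in question: passing a method directly to setInterval loses the object it belongs to, so you wrap the call in a closure (a minimal sketch):

    function QueueManager(name) {
        this.name = name;
    }
    QueueManager.prototype.poll = function () {
        alert("polling " + this.name);
    };

    var manager = new QueueManager("tags");

    // setInterval(manager.poll, 1000) would run poll with the wrong "this";
    // the closure keeps the method bound to its object.
    setInterval(function () { manager.poll(); }, 1000);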
I'm exploring ways to implement continuous polling of a resource and event dispatching based on its state in JavaScript. Are worker threads, or something similar, achievable in JavaScript? Currently, timer-based triggers seem like the most viable option. Is this an acceptable approach, or are there more efficient and appropriate alternatives?
This post kicks off documenting the requirements for the next version of AJAXTagger. The goal is to create a successful application (by my definition) by outlining features across functional areas, UI/UX, client- and server-side business logic, data access, and dependencies. Key features include easy journal tagging, related information retrieval (tags, articles, blogs, websites), diverse search provider integration, streamlined results presentation, image inclusion, and efficient article pulling/saving. The UI should minimize user effort, provide immediate feedback, and offer information hiding. Performance is crucial, targeting IE6/7 and Firefox, with emphasis on minimal server round trips, client-side optimization, and error handling. Data storage is preferably client-side, with external access optimized for speed and resilience. External dependencies include various search engines/services, while internal constraints involve limited server access and reliance on HTML, JavaScript, and XmlHttpRequest.
I'm developing the next version of the AJAX Tagger and need your input! Currently, it enhances blog posts by linking to related blogs on Technorati. For the next version, I'm wondering if you'd prefer it to link to other sources like IceRocket or even Wikipedia. Let me know where you'd find automatic tag links most valuable.
I just got my first comment from a stranger, Gaby de Wilde, on my AJAX Tagger! He even used it on his site. I'm grateful for the feedback and plan to incorporate his suggestions in the next version. Check out his blog to see it in action, and please send me any feedback you have!
The MSN Search API is now available. I haven't explored it fully yet, but here are some useful links: Why MSN Search?, the MSN Search API download, the SOAP service description, and the developer registration form (requires a .Net Passport). The API appears to be a SOAP service, so I'll likely need to create a proxy for direct calls. The download includes documentation and sample projects. You'll need an application ID, similar to Yahoo's. If you know how to use SOAP with JavaScript, please email me!
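For anyone else staring at SOAP from JavaScript: the general pattern is to POST a SOAP envelope with XMLHttpRequest and read back the XML. A rough sketch follows; the endpoint, SOAPAction, and body are placeholders, not MSN's actual values, which live in the service description.

    // Placeholder endpoint and action: take the real ones from the WSDL.
    var xhr = window.XMLHttpRequest
        ? new XMLHttpRequest()
        : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.open("POST", "/proxy/msnsearch", false);
    xhr.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
    xhr.setRequestHeader("SOAPAction", "http://example.com/Search");

    var envelope =
        '<?xml version="1.0" encoding="utf-8"?>' +
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
        '<soap:Body><!-- service-specific request elements --></soap:Body>' +
        '</soap:Envelope>';

    xhr.send(envelope);
    alert(xhr.responseXML.xml); // the SOAP response document (IE)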
Microsoft's Start.com has launched a new developer API, possibly using the ATLAS framework (precursor to ASP.NET AJAX). It seems to focus on creating JavaScript-based "Gadgets," similar to RSS consumers, that need to be hosted on a server. The API also requires enabling cross-domain data sources in Internet Explorer, a topic I've discussed previously.
In this final part of "The Failures of my First AJAX Application" series, I reflect on the cross-browser compatibility issues I encountered. Focusing on Internet Explorer during development led to problems in Firefox, particularly with security errors (cross-domain data retrieval) and differences in the XML DOM model. The key takeaway is to consider cross-browser support from the outset, anticipating discrepancies between browsers and coding around missing features, similar to CSS development. The next version will prioritize cross-browser compatibility, potentially including Safari. This series has been invaluable for shaping the requirements of the upcoming version.
This is the sixth part of my series on the failures of my first AJAX application, AJAXTagger. While I initially hoped it would be useful for everyone, it mainly ended up benefiting just me by simplifying the tagging process for my blog posts. Although it didn't meet my initial grand expectations, it was a valuable learning experience. The next version will prioritize my needs but also consider features that could benefit other users, ultimately adding value for my readers.
In this part of my series on my first AJAX application, I discuss how my initial hopes for AJAX as a solution to bandwidth and UI problems, and for speed improvements, weren't fully realized. The first version, which incorporated Technorati stats and Yahoo's TermExtraction API, was slow because the queries ran sequentially and Technorati itself was slow to respond. I ended up removing those features: because I insisted on a fully rendered page before showing anything, the benefits of AJAX were negated. The next version will be fully asynchronous, with a request manager for trickle filling and background processing. Check out my AJAX Technorati Tagger to see what I'm aiming for.
This blog post, the third in a series about my first AJAX application, focuses on the disastrous visual design. While the functionality is there, the UI is frankly terrible. I've realized my design skills are lacking, and though I have a vision, I struggle to bring it to life visually. The problem is that the application's logic is tightly coupled to the UI. Moving forward, I need to decouple these components. The next version will have a UI-agnostic data structure that the UI can interrogate. This separation will allow me to work on the AJAX framework, business logic, and UI independently.
In part two of this series on the failures of my first AJAX application, I discuss how my initial plan to reduce bandwidth by having the client directly access third-party web services didn't work out. Due to cross-domain scripting issues in Firefox and IE 6/7, I had to implement proxy scripts on my server. This means all client requests now go through my server, increasing my bandwidth demands. While using a proxy server offers benefits like hiding security information (like Technorati developer tokens) and enabling data manipulation/request merging, it comes with the major downside of increased bandwidth usage and the need to create/maintain proxy scripts. I hope to support cross-domain data sources in the next version to mitigate these issues but acknowledge there might still be scenarios where proxy scripts are necessary.
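The client side of the proxy pattern is straightforward: instead of requesting the third-party service, you request a same-origin script that forwards the call (a sketch; the proxy path is a placeholder, not my real script):

    // Cross-domain, blocked by the browser:
    //   xhr.open("GET", "http://api.technorati.com/cosmos?...", true);

    // Same-origin proxy: the server forwards the request and appends the
    // developer token, so the token never reaches the client.
    var xhr = window.XMLHttpRequest
        ? new XMLHttpRequest()
        : new ActiveXObject("Microsoft.XMLHTTP");
    var url = "/cgi-bin/technorati-proxy.pl?url=" +
        encodeURIComponent("http://www.kinlan.co.uk/");
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
            // process the proxied Technorati response here
        }
    };
    xhr.open("GET", url, true);
    xhr.send(null);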
In this third part of my series on my first AJAX application, I'm diving into the power of the Yahoo! API. It's been a learning experience, and I'm incredibly impressed with how much it offers compared to Google's. I've been exploring the Term Extraction and Related Searches APIs, and I'm starting to think about how to use the Contextual Search API. The Term Extraction API is great for pulling out keywords, while the Related Searches API helps me find relevant search queries. My app combines these to analyze blog posts and generate related searches. I'm hoping to use the Contextual Search API to add targeted search results. Future versions will incorporate more APIs, possibly from Technorati, to enhance functionality. Stay tuned!
In part 2 of my AJAX application journey, I'm tackling browser compatibility issues between Firefox and Internet Explorer. Key differences include handling XML node text, event triggers for synchronous XmlHttpRequests, and table object model inconsistencies. Looking ahead, I'm planning to componentize my JavaScript for better management and browser caching, and create an event-driven object model for my next application to improve structure and cross-browser functionality. My focus will be on supporting the lowest common denominator for broader browser compatibility.
IE7's synchronous XmlHttpRequest locks up all browser tabs during long requests, not just the active tab. Is this behavior expected or a bug? If you've encountered this problem, please email me so I can investigate further.
In this first installment of a series about my AJAX application journey, I'm sharing my initial success: learning to think asynchronously. The current app takes user-entered text, sends it to a Yahoo web service (via a local Perl script), gets "interesting" words, and then makes synchronous calls to Technorati for tag counts. This synchronous approach locks the browser, especially with multiple tags. The next version will use a queue and multiple asynchronous XMLHttpRequest objects managed by a thread manager to avoid browser lock-up. This will create a more responsive app where results appear as they become available. Key requirements for v2 include full asynchronicity, XMLHttpRequest management, a generic work queue, background task indicators, and a non-blocking UI. I'm also planning to develop a reusable object model.
My first foray into Ajax was a mixed bag, yielding both valuable lessons and frustrating setbacks. On the plus side, it sparked a deeper understanding of asynchronous coding, cross-browser compatibility (especially between Firefox and IE), and the potential of APIs like Yahoo! and Technorati. It also reignited my interest in Perl and prompted reflection on my blogging practices. However, the application fell short in several areas: it lacked search functionality, didn't reduce bandwidth, had a poor visual design, and wasn't user-friendly or impactful enough to generate feedback or traffic. Moving forward, I'll share my design process and desired improvements, starting with a clear requirements document. I'm eager to learn from this experience and create a more effective application.
I've noticed a difference in how Internet Explorer (versions 6 and 7) and Firefox handle synchronous XmlHttpRequests. In both browsers, you can send requests using JavaScript. However, after the synchronous send() call, Internet Explorer still triggers the onreadystatechange event, while Firefox does not. I need to research which behavior is correct and according to spec. If you happen to know, please email me!
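A minimal repro of the difference (a sketch; point it at any same-origin URL):

    var xhr = window.XMLHttpRequest
        ? new XMLHttpRequest()
        : new ActiveXObject("Microsoft.XMLHTTP");

    xhr.onreadystatechange = function () {
        // IE6/7 fires this even though the request is synchronous;
        // Firefox stays silent once send() has returned.
        alert("readyState: " + xhr.readyState);
    };

    xhr.open("GET", "/index.html", false); // false = synchronous
    xhr.send(null);
    alert("after send: " + xhr.status);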
I've created my first AJAX application, an AJAX Technorati Tagger, which can be found here. It allows users to input text, receive suggested Technorati tags (powered by Yahoo's web service), and generate a list of related keywords. It's still a work in progress with some bugs, but feedback is welcome!
I'm developing an AJAX application to automatically generate Technorati, Feedster, and MSN search boxes with relevant tags for my blog posts. It's a JavaScript webservice querier that uses results from one service as input for another. Currently, it only supports IE6/7 due to cross-domain data source import restrictions in Firefox. I'm exploring JavaScript code signing as a potential solution. The application integrates with Yahoo webservices, with plans to include Technorati and hopefully Feedster. There are security concerns regarding my Yahoo key. I aim to have a prototype available for feedback soon.