I came across Matt Harrison's post discussing the challenges of choosing between various Ajax toolkits and frameworks, and it really resonated with me. He highlighted the OSA Foundation's survey of Ajax/JavaScript libraries, which covers a wide range of options like Dojo, DWR, JSON-RPC-JAVA, MochiKit, Prototype, Rico, SAJAX, Scriptaculous, Xajax, and Sack. It's fascinating to see how these libraries address different aspects of Ajax development. It has made me rethink my recent work on the backend XMLHttpRequest code for Ajax Tagger Version 2, and wonder whether leveraging an existing library would have been more efficient. Links to the OSA Foundation, Michael Mahemoff's framework information, and my own previous blog post on Ajax layers are included for further exploration.
This post discusses the security implications of cross-domain XMLHttpRequest access. While some argue that such access increases the risk of phishing attacks and unauthorized data access, others contend that these risks are minimal and that the benefits of cross-domain access, such as reduced bandwidth costs for "mash-up" applications, outweigh the potential downsides. The current security model, which requires proxying requests through the originating server, is seen as costly. I propose a server-side security model where third-party servers can control which clients can directly access their data, addressing the bandwidth theft concerns.
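To make the proposal concrete, here's a minimal sketch of the kind of server-side opt-in I have in mind, written as modern server-side JavaScript purely for illustration; the X-Requesting-Domain header and the allowlist are my own assumptions, not an existing mechanism:

```javascript
// Hypothetical opt-in model: the third-party server, not the browser,
// decides which client domains may fetch its data directly.
const http = require("http");

// Illustrative allowlist of sites permitted to call us cross-domain.
const ALLOWED_CLIENTS = ["myblog.example.com", "partner.example.org"];

http.createServer(function (req, res) {
  // Assume the browser discloses the calling page's domain in a header
  // (the name "x-requesting-domain" is made up for this sketch).
  const caller = req.headers["x-requesting-domain"] || "";
  if (ALLOWED_CLIENTS.indexOf(caller) !== -1) {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ data: "feed contents here" }));
  } else {
    // Unapproved domains are refused outright, so nobody can offload
    // their bandwidth costs onto us without permission.
    res.writeHead(403);
    res.end();
  }
}).listen(8080);
```

The point is that the data owner bears the cost of serving the data, so the data owner should be the one who decides who may fetch it directly.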
This post kicks off documenting the requirements for the next version of AJAXTagger. The goal is to create a successful application (by my definition) by outlining features across functional areas, UI/UX, client- and server-side business logic, data access, and dependencies. Key features include easy journal tagging, related information retrieval (tags, articles, blogs, websites), diverse search provider integration, streamlined results presentation, image inclusion, and efficient article pulling/saving. The UI should minimize user effort, provide immediate feedback, and offer information hiding. Performance is crucial, targeting IE6/7 and Firefox, with emphasis on minimal server round trips, client-side optimization, and error handling. Data storage is preferably client-side, with external access optimized for speed and resilience. External dependencies include various search engines/services, while internal constraints involve limited server access and reliance on HTML, JavaScript, and XMLHttpRequest.
I just read on the IE Team's Blog that Internet Explorer 7 will have native support for XMLHttpRequest and a rebuilt, windowless select element. This is huge! Native XMLHttpRequest means no more ActiveX security issues. And a windowless select element? Finally, we might have proper layering and styling. Fingers crossed these features make it into Beta 2!
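For contrast, here's a minimal sketch of the fallback dance that native support should make unnecessary (the function name is my own, and the ProgID shown is the classic one; newer MSXML versions exist too):

```javascript
// Cross-browser XMLHttpRequest creation, as it has to be done today.
function createRequest() {
  if (window.XMLHttpRequest) {
    // Firefox, Safari, Opera -- and, per this announcement, IE7.
    return new XMLHttpRequest();
  }
  if (window.ActiveXObject) {
    // IE6 and earlier need the ActiveX control, which is exactly the
    // dependency native support removes.
    return new ActiveXObject("Microsoft.XMLHTTP");
  }
  return null; // no Ajax support at all
}
```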
I've been reflecting on the direction of my blog, "C#, .Net Framework." I feel the name is too limiting, given my recent posts on topics like IE7, AJAX, Firefox, and XMLHttpRequest. I plan to broaden the scope while keeping the content technical. I also want to increase reader interaction, possibly by crowdsourcing a new name for the blog.
In part 2 of my AJAX application journey, I'm tackling browser compatibility issues between Firefox and Internet Explorer. Key differences include handling XML node text, event triggers for synchronous XMLHttpRequests, and table object model inconsistencies. Looking ahead, I'm planning to componentize my JavaScript for better management and browser caching, and to create an event-driven object model for my next application to improve structure and cross-browser functionality. My focus will be on supporting the lowest common denominator for broader browser compatibility.
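As an example of the first difference, here's a minimal sketch of the kind of wrapper I mean for reading an XML node's text (the function name is my own; node is assumed to be an element from responseXML):

```javascript
// Reading the text of an XML node differs between the two DOMs.
function getNodeText(node) {
  if (node.textContent !== undefined) {
    return node.textContent; // Firefox and other W3C DOM browsers
  }
  return node.text;          // IE's MSXML DOM uses .text instead
}
```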
IE7's synchronous XMLHttpRequest locks up all browser tabs during long requests, not just the tab that issued the request. Is this behavior expected or a bug? If you've encountered this problem, please email me so I can investigate further.
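For anyone who wants to reproduce it, this is the pattern that triggers the lock-up (the URL is illustrative; any slow endpoint will do):

```javascript
// Passing false for the async flag makes send() block until the
// server responds -- and in IE7 it appears to freeze every tab.
var request = new XMLHttpRequest();           // native in IE7
request.open("GET", "/slow-endpoint", false); // false = synchronous
request.send(null);                           // blocks the UI here
alert(request.status);
```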
In this first installment of a series about my AJAX application journey, I'm sharing my initial success: learning to think asynchronously. The current app takes user-entered text, sends it to a Yahoo web service (via a local Perl script), gets "interesting" words, and then makes synchronous calls to Technorati for tag counts. This synchronous approach locks the browser, especially with multiple tags. The next version will use a queue and multiple asynchronous XMLHttpRequest objects managed by a thread manager to avoid browser lock-up. This will create a more responsive app where results appear as they become available. Key requirements for v2 include full asynchronicity, XMLHttpRequest management, a generic work queue, background task indicators, and a non-blocking UI. I'm also planning to develop a reusable object model.
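To sketch what I mean by a queue feeding multiple asynchronous requests, here's a minimal illustration; the names and the limit of three parallel requests are my own assumptions, not the actual AJAXTagger code:

```javascript
// A generic work queue feeding a small pool of async requests.
var MAX_PARALLEL = 3; // how many requests may run at once
var queue = [];       // pending work items {url, callback}
var active = 0;       // requests currently in flight

function enqueue(url, callback) {
  queue.push({ url: url, callback: callback });
  pump();
}

function pump() {
  // Start queued work until the pool is full.
  while (active < MAX_PARALLEL && queue.length > 0) {
    var item = queue.shift();
    active++;
    send(item);
  }
}

function send(item) {
  var xhr = window.XMLHttpRequest
    ? new XMLHttpRequest()
    : new ActiveXObject("Microsoft.XMLHTTP");
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {        // request complete
      active--;
      item.callback(xhr.responseText); // show result as it arrives
      pump();                          // start the next queued item
    }
  };
  xhr.open("GET", item.url, true);     // true = asynchronous
  xhr.send(null);
}
```

Because nothing blocks, the browser stays responsive and each tag count can be rendered the moment its request completes.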
I've noticed a difference in how Internet Explorer (versions 6 and 7) and Firefox handle synchronous XMLHttpRequest calls. Both browsers let you send the request from JavaScript in the same way, but after the synchronous send() call returns, Internet Explorer still triggers the onreadystatechange event, while Firefox does not. I need to research which behavior is correct according to the spec. If you happen to know, please email me!
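Here's a minimal sketch of the experiment, in case you want to try it yourself (the URL is illustrative):

```javascript
// Attach a handler, send synchronously, and see whether it fires.
var xhr = window.XMLHttpRequest
  ? new XMLHttpRequest()
  : new ActiveXObject("Microsoft.XMLHTTP");
xhr.onreadystatechange = function () {
  // IE6/IE7 reach this even for a synchronous send; Firefox doesn't.
  alert("onreadystatechange fired, readyState=" + xhr.readyState);
};
xhr.open("GET", "/some-resource", false); // false = synchronous
xhr.send(null);
```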
I've been exploring how Microsoft's Start.com retrieves data from external web feeds. It appears they use a server-side script to tunnel requests to the remote server, effectively acting as a proxy. This workaround is necessary due to browser security restrictions that prevent cross-domain data fetching in Firefox and certain Internet Explorer configurations. Consequently, my AJAX application will need to handle the additional bandwidth required for retrieving data from Yahoo and Technorati directly, as redirecting XMLHttpRequest calls isn't a viable option.
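From the client's point of view, the proxy pattern looks something like this minimal sketch; the /proxy endpoint, its url parameter, and processFeed are all my own illustrations:

```javascript
// The browser only ever talks to its own server, which fetches the
// remote feed on its behalf -- paying the bandwidth cost both ways.
var feedUrl = "http://feeds.example.com/data.xml"; // cross-domain target
var xhr = new XMLHttpRequest();
xhr.open("GET", "/proxy?url=" + encodeURIComponent(feedUrl), true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    processFeed(xhr.responseText); // hypothetical handler
  }
};
xhr.send(null);
```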