Friday, June 3, 2011

Web-1.0 → Web-2.0: From Web Documents To Web Applications!

The move to Web-2.0 coincides with the move from a web of documents to a web of applications, where Web Applications are in turn built out of web parts (see Toward 2^W — Beyond Web-2.0). Web Applications were now significantly easier to build: one no longer needed to implement such applications as browser plugins.

Leveraging The Web Browser For Accessibility

At the end of 2002, I attended a Mozilla Developer Day where I saw what could be done within the browser using HTML, XUL, CSS and XBL. Alphabet soup aside, the combination of these technologies created the potential for writing powerful Web applications without resorting to custom plugins and platform-specific C or C++ code. I spent a few weeks at the end of that year building TalkZilla, a speech extension for Mozilla, but gave up after failing to implement Text-To-Speech within the platform using XPCOM in the two weeks I had allotted myself. But in the process, it became evident that sooner or later, it would become possible to build the next generation of access technologies purely within the browser.

Fire Vox — A Talking Extension for Firefox

In the fall of 2005, when I started at Google, I moved on from my work on W3C XForms and revisited the possibility of building access technology into the browser. This time around, I decided to expose the Text-To-Speech layer as a local HTTP server, and accessed the service using XML HTTP Request in Firefox; the layer that had been hard to build in 2002 was now implementable in under a day. I began seriously exploring the browser-based accessibility route once again, and coincidentally discovered Charles Chen's work on Fire Vox. As it turned out, he had done the rest of the work: using platform-specific speech services such as SAPI, he had created a Firefox extension that not only provided spoken access to the document-oriented Web-1.0, but also demonstrated the power of browser-based access technologies by delivering the first implementation of W3C ARIA within Firefox 1.5.
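The local-server approach can be sketched in a few lines of JavaScript. The endpoint path, port, and query parameter below are hypothetical stand-ins for illustration, not the actual 2005 service:

```javascript
// Build the request URL for a hypothetical local speech service.
// The "/speak" path and "text" parameter are illustrative assumptions.
function buildSpeakUrl(baseUrl, text) {
  // encodeURIComponent keeps spaces and punctuation intact in transit.
  return baseUrl + "/speak?text=" + encodeURIComponent(text);
}

// Fire-and-forget request from the page: the server side hands the
// text to the platform Text-To-Speech engine.
function speak(text) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", buildSpeakUrl("http://localhost:8000", text), true);
  xhr.send();
}
```

The point of the design is that once speech sits behind plain HTTP, any page-level JavaScript can drive it; no XPCOM or native glue code is needed on the browser side.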

Web Applications And Spoken Access

In the fall of 2007, Charles joined Google, and we began exploring the next phase in browser-based access. One thing that became apparent from the Fire Vox experience, as well as from what we had all learnt from different screenreaders, was that at the end of the day, one needed application-specific scripts to enhance the base-level spoken access provided by the screenreader. Traditionally, screenreaders implement such application-specific extensions in a screenreader-specific scripting language; as we investigated building access technologies out of Web technologies, we created a framework for application-specific scripting in JavaScript. This led to the AxsJAX project, where we implemented a framework for scripting Web Applications within Firefox to produce context-specific spoken feedback via the user's screenreader.

From Greasemonkey To Chrome Extensions

By the end of 2008, we felt we had learnt all we could from the AxsJAX project. AxsJAX leveraged the Greasemonkey extension in Firefox to add application-specific scripting implemented in JavaScript. By then, the creator of Greasemonkey had started implementing the Chrome extension framework, and Chrome extensions draw heavily from the Greasemonkey experience. With Chrome beginning to provide an increasingly viable platform for creating Web Applications out of pure Web technologies (HTML, JavaScript and CSS), we started leveraging this platform to build a complete access solution authored using Web technologies.

Exposing Platform Services To Web Applications

Chrome extensions are written in HTML, CSS and JavaScript. These extensions get full access to the Document Object Model (DOM) of the pages being viewed. This meant that we could implement a large portion of the access solution in pure JavaScript. What's more, as a Web Application, implementing access to dynamic Web pages proves no harder than providing access to static content; thus, we were able to implement ARIA support from the very beginning of the project.
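As a sketch of what DOM-level access makes possible, the helper below derives spoken text for a node, preferring an ARIA label over raw content. The priority order shown is an illustrative simplification, far simpler than any real screenreader's text-computation algorithm:

```javascript
// Derive the text a talking extension might speak for a DOM node.
// Preferring aria-label over textContent is a simplified illustration
// of why ARIA support falls out naturally once a script can read the
// live DOM directly.
function spokenTextFor(node) {
  if (node.getAttribute && node.getAttribute("aria-label")) {
    return node.getAttribute("aria-label");
  }
  return (node.textContent || "").trim();
}
```

Because the same function runs against whatever the DOM contains right now, dynamically updated content needs no special handling beyond re-querying the node.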

However, not everything on a platform can be implemented via JavaScript (at least not yet). Today, Text-To-Speech is still implemented in native code, but see speech synthesis in your browser from Mozilla for the shape of things to come. In addition, in the case of ChromeOS, some parts of the user interface were implemented using the underlying windowing toolkit. For our ChromeVox solution on Chrome OS, we exposed these to the JavaScript layer via extension APIs; with those APIs in place, we could then implement ChromeVox entirely in JavaScript.
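The speech side of those extension APIs surfaces today as chrome.tts. A minimal call looks like the sketch below; it must run inside an extension that declares the "tts" permission in its manifest, and the particular options chosen are illustrative:

```javascript
// Speak an announcement through the native speech engine that the
// chrome.tts extension API exposes to JavaScript.
function announce(text) {
  chrome.tts.stop();                          // cut off any ongoing speech
  chrome.tts.speak(text, { enqueue: false }); // speak immediately
}
```

The pattern of stopping before speaking mirrors what screenreader users expect: new context interrupts stale speech rather than queueing behind it.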

Conclusion: And The Best Is Yet To Come!

As the Web platform continues to evolve, with the Web browser able to access an ever-increasing set of platform-specific services that are in turn exposed to Web Applications via JavaScript, we are only beginning to scratch the surface of what can be built in the space of web-based access technologies.

Web As A Platform For Universal Access

Universal Information Access In Web 1.0

The next few posts will cover the evolution of the Web as a platform for universal information access. As the Web has evolved from a web of documents to a web of applications, the Web browser, the software used by the majority of users to view the Web, has itself evolved from a document viewer into an application container. Over the last 10 years, the focus has been on turning the Web browser into a platform for delivering interactive applications: witness the progress from XML HTTP Request (XHR) and AJAX applications, as epitomized by Google Maps, to the formalization of Web Applications in the context of HTML5. The focus of these posts is to trace the parallel evolution of the affordances needed to turn the Web browser into a platform for delivering adaptive technologies that promote universal access.

Browser-Based Access Technologies In Web 1.0

The 1990's saw the first attempt to build a browser-based software platform within the mainstream world with the ascent of Netscape. Though that attempt fizzled out, it laid the foundations for much of what we see today in the form of Web Applications and cloud computing. In parallel, the accessibility world saw the development of talking browsers: the first of these was PW WebSpeak from Productivity Works, closely followed by IBM Home Page Reader. Like the Netscape browser of the 1990's, neither of these solutions survived, and part of the analysis that follows is an attempt to sketch out how the world of Web programming has changed in the 15 years since.

Things to observe from Web 1.0:

  • The focus in the 1990's was on Web documents with small islands of interactivity created via HTML forms.
  • The document-based Web made all web interaction transactional, thereby requiring server-side round trips at the end of every forms-based interaction.
  • Extending Web browsers with additional functionality was hard — accessibility solutions built using the browser had to be implemented either as a browser plug-in, or by embedding the browser within your own application.

The final point above is perhaps the most significant reason why browser-based accessibility solutions remained hard to implement: in that period, accessibility solutions, like Web Applications in general, could not be built using Web technologies.

From A Web Of Documents To A Web Of Applications

The next article in this series will detail the transition from a Web of documents to a Web of applications, and analyse the consequences for building web-based access technologies.

Sunday, May 29, 2011

Introducing The ChromeVox Blog

ChromeVox represents the next step in leveraging Web technology for improving the state of universal information access. This blog will cover the history and evolution of such solutions and lay out the long-term vision for access technologies built on the Web.
