Thursday, March 28, 2013

Navigating Specialized Content In ChromeVox (R26)

1 Navigating Specialized Content In ChromeVox (R26)

1.1 Introduction

ChromeVox provides numerous ways of navigating Web content efficiently, so that you can obtain multiple views of that content based on the task at hand. ChromeVox 26 extends this efficient navigation to specialized content such as tables and mathematical markup (MathML) — the goal is a flexible design that allows us to add more types of specialized content in the future. Here is a brief explanation of the design rationale underlying this new functionality. Note that we are continuing to refine this usage model, and all constructive feedback is welcome via the Axs-Chrome-Discuss Google Group.

1.2 A Brief Recap Of Content Navigation

As a precursor to explaining how we navigate special content such as tables and MathML, let's first recap the way we currently navigate ordinary content. The notion of granularity is central to ChromeVox. As we read through any content, we naturally group things into chunks; in the user interface, we call these characters, words, lines, objects and groups — a hierarchy motivated by common usage in everyday language.
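
To make the hierarchy concrete, here is a minimal JavaScript sketch of the granularity list and of switching to a coarser or finer granularity; it is purely illustrative and is not ChromeVox's actual code.

  // Purely illustrative; not ChromeVox's actual implementation.
  // The navigation granularities, ordered from finest to coarsest.
  var GRANULARITIES = ['character', 'word', 'line', 'object', 'group'];

  // Move one step coarser (delta = 1) or finer (delta = -1), clamping at the ends.
  function changeGranularity(current, delta) {
    var index = GRANULARITIES.indexOf(current);
    var next = Math.min(Math.max(index + delta, 0), GRANULARITIES.length - 1);
    return GRANULARITIES[next];
  }

  changeGranularity('word', 1);   // 'line'
  changeGranularity('group', 1);  // 'group' (already the coarsest)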

When it comes to special content, the associated groupings become more interesting. Tables, for example, have rows, columns and cells. However, one can still view them through the lens and vocabulary of everyday language (characters, words, and lines). ChromeVox now allows users to apply either lens to tables; this means that you can view a table as being made up of lines, words and characters, or alternatively, as being made up of rows, columns and cells. More interestingly, you can easily switch between these two views.

1.2.1 Illustrative Example

Here is a small table that contains a sample class schedule. In practice, you may either want to read this information as a sequence of lines, or alternatively, browse using the underlying tabular structure.

Notice that as you navigate this document, you hear ChromeVox announce the table upon first encountering it; however, ChromeVox navigation continues to treat it as a series of lines, words and characters. For a quick reading of the class schedule, this is adequate.

Time          | Class         | Location
11:00 - 11:45 | Calculus 101  | 100
12:00 - 12:45 | Physics 101   | 200
13:30 - 14:15 | Chemistry 101 | 300

Next, let's move back to the above table and browse it using the underlying tabular structure — for instance, you may wish to do this with a larger class schedule when quickly looking for a specific class.

Using your present ChromeVox granularity, move back to the table – hint: using group granularity will get you there fastest. When you hear ChromeVox announce the table, switch to table navigation by pressing CVOX+\ (ChromeVox Backslash). Table navigation provides two granularities — row and column — and the default is row granularity. So now, ChromeVox navigation moves by rows, and the current cell is announced as you traverse the table. You can switch between row and column granularity using the same keys that you would normally use to switch between line, word and character granularities. You can exit table navigation by pressing CVOX+Backspace (ChromeVox Backspace).
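
As a rough illustration of what it means to treat the table as rows, columns and cells rather than as lines, here is a small JavaScript sketch that addresses the schedule above as a grid using standard DOM table APIs; it is illustrative only and is not ChromeVox's own code.

  // Illustrative only; not ChromeVox's actual implementation.
  var table = document.querySelector('table');

  // Address the table as a grid of cells: (row index, column index).
  function cellAt(rowIndex, colIndex) {
    var row = table.rows[rowIndex];
    return row ? row.cells[colIndex] : null;
  }

  // Moving within the tabular structure: same column, next row ...
  cellAt(1, 1).textContent;  // 'Calculus 101'
  cellAt(2, 1).textContent;  // 'Physics 101'

  // ... versus same row, next column.
  cellAt(1, 2).textContent;  // '100'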

1.2.2 Other Specialized Content

As we enhance our support for other types of specialized content, e.g., MathML, you will be able to use the command CVOX+\ (ChromeVox Backslash) to enter math navigation mode.

1.3 Conclusion

In summary, our design goals for ChromeVox's navigation model are as follows:

  1. Easily navigate through different types of content with a common set of keyboard commands.
  2. Enable context-specific navigation of specialized content without the need to learn additional special keys.
  3. Enable the user to view the same piece of content through different lenses to obtain multiple views.

We welcome feedback about this navigation design or other comments at our axs-chrome-discuss Google group.

– David Tseng and the ChromeVox team.


Friday, January 4, 2013

ChromeVox R25: Keymap Design Overview

1 ChromeVox Keymap Design

Keymaps in ChromeVox R25 have gone through a major overhaul motivated by the need to:

  • Enable keymaps that are consistent with the look and feel of different platforms.
  • Enable ergonomic keybindings for different user communities.
  • Reduce the need for chording, i.e., the need to hold down multiple keys at the same time.

This article gives a high-level overview of the underlying design and user-model.

1.1 Design Overview

ChromeVox provides a rich set of end-user commands that need to be available anywhere within a Web application. Consequently, the key assignments for these commands need to avoid conflicts with Chrome, as well as the underlying platform that Chrome is being run on, e.g., ChromeOS or Windows.

1.1.1 ChromeVox Modifier

Early versions of ChromeVox required the ChromeVox Modifier to be held down to invoke ChromeVox commands, and picked reasonable defaults for this modifier on different platforms, e.g., Ctrl+Alt on Windows, and Shift+Search on ChromeOS. Starting with R25, we enable users to configure the ChromeVox Modifier key via the ChromeVox Options page … you simply press the desired combination of modifier keys while in the appropriate edit field in the options page.

1.1.2 ChromeVox Prefix

Starting in ChromeVox R25, we provide an alternative mechanism for invoking ChromeVox commands, called the ChromeVox Prefix. Here, you get to pick a ChromeVox Prefix key of your choice … in my case I use Ctrl ;. The ChromeVox Prefix key can be set via the ChromeVox Options page by typing a single character into the appropriate edit field … so in my case, I pressed ; after first deleting the default assignment of Ctrl z.

The ChromeVox Prefix differs from the ChromeVox Modifier in several important ways:

  • You do not need to hold the ChromeVox Prefix down while pressing other keys. This eliminates the need for complex key-chords. As an example, you can invoke continuous reading, i.e., ChromeVox+r, by pressing the prefix key Ctrl ; and then pressing r (see the sketch after this list).
  • You get access to a lot more keys! As an example, you can unambiguously assign ChromeVox commands to different variations of a given key, e.g., (h, Shift+h, …).
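
Here is a minimal sketch of how a prefix key avoids chording, using the Ctrl ; prefix from the example above. It is not ChromeVox's actual key handling code, and startContinuousReading is a hypothetical command handler.

  // Illustrative only; not ChromeVox's actual key handling.
  // Hypothetical command handler: begin reading from the current position.
  function startContinuousReading() { console.log('continuous reading'); }

  var awaitingCommand = false;
  document.addEventListener('keydown', function(event) {
    if (!awaitingCommand && event.ctrlKey && event.key === ';') {
      // The prefix was pressed; the next keystroke names a command.
      awaitingCommand = true;
      event.preventDefault();
      return;
    }
    if (awaitingCommand) {
      awaitingCommand = false;
      if (event.key === 'r') {
        startContinuousReading();
      }
      event.preventDefault();
    }
  }, true);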

ChromeVox R25 provides an initial set of keymaps; over the next few releases, we hope to provide more custom keymaps, as well as introduce the ability for users to load entirely custom keymaps.

Friday, April 27, 2012

ChromeVox Technical Overview

As a screenreader built using pure Web technology, ChromeVox leverages many aspects of the evolving browser platform. We have put together a detailed technical overview of the design and architecture of ChromeVox in the form of a technical report. The report is published both as a PDF document and as HTML. The content was authored in LaTeX and converted to HTML using the excellent TeX4HT package. Note that the figures in the HTML version lack the visual fidelity of the PDF version; however, the HTML version loses no information and may work better with access technologies.

Friday, June 3, 2011

Web-1.0 → Web-2.0: From Web Documents To Web Applications!

The move to Web-2.0 coincides with the move from a web of documents to a web of applications — where Web Applications are in turn built out of web parts (see Toward 2^W — Beyond Web-2.0). Web Applications were now significantly easier to build — one no longer needed to build such applications as browser plugins.

Leveraging The Web Browser For Accessibility

At the end of 2002, I attended a Mozilla Developer Day where I saw what could be done within the browser using HTML, XUL, CSS and XBL. Alphabet soup aside, the combination of these technologies created the potential for writing powerful Web applications without resorting to custom plugins and platform-specific C or C++ code. I spent a few weeks at the end of that year building TalkZilla, a speech extension for Mozilla, but gave up after failing to successfully implement Text-To-Speech within the platform using XPCOM in the 2 weeks I had allotted myself. But in the process, it became evident that sooner or later, it would become possible to build the next generation of access technologies purely within the browser.

Fire Vox — A Talking Extension for Firefox

In fall of 2005, I moved on from my work on W3C XForms and revisited the possibility of building access technology into the browser when I started at Google. This time around, I decided to expose the Text-To-Speech layer as a local HTTP server, and accessed the service using XML HTTP Request in Firefox — the layer that had been hard to build in 2002 was now implementable in under a day. I began seriously exploring the browser-based accessibility solution route once again, and coincidentally discovered Charles Chen's work on Fire Vox. As it turned out, he had done the rest of the work — using platform-specific speech services such as SAPI, he had created a Firefox extension that not only provided spoken access to the document-oriented Web-1.0, but also demonstrated the power of browser-based access technologies by delivering the first implementation of W3C ARIA within Firefox 1.5.
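
The approach amounted to the following sketch: a small local HTTP server wraps the platform's Text-To-Speech engine, and the page talks to it with XMLHttpRequest. The port and URL scheme below are made up for illustration.

  // Illustrative only; the local speech server, its port and its URL scheme
  // are assumptions made for this sketch.
  function speak(text) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET',
        'http://localhost:8000/speak?text=' + encodeURIComponent(text));
    xhr.send();
  }

  speak('Hello from the browser');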

Web Applications And Spoken Access

In fall of 2007, Charles joined Google, and we began exploring the next phase in browser-based access. One thing that became apparent from the Fire Vox experience, as well as from what we had all learnt from different screenreaders, was that at the end of the day, one needed application-specific scripts to enhance the base-level spoken access provided by the screenreader. Traditionally, screenreaders implement such application-specific extensions in a screenreader-specific scripting language — as we investigated implementing access technologies out of Web Technologies, we created a framework for application-specific scripting in JavaScript. This led to the AxsJAX project, where we implemented a framework for scripting Web Applications within Firefox to produce context-specific spoken feedback via the user's screenreader.
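
The general technique, sketched below in present-day terms (this is not AxsJAX's actual API), is for an application-specific script to hand context-sensitive messages to the user's screenreader, for example through an ARIA live region.

  // Illustrative only; not AxsJAX's actual API.
  // A live region lets a page script hand text to the user's screenreader.
  var liveRegion = document.createElement('div');
  liveRegion.setAttribute('aria-live', 'polite');
  document.body.appendChild(liveRegion);

  // A hypothetical application-specific script announces the result of an
  // in-page action, e.g. after the user archives a message in a mail client.
  function announce(message) {
    liveRegion.textContent = message;
  }

  announce('Message archived');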

From Greasemonkey To Chrome Extensions

By the end of 2008, we felt we had learnt all we could from the AxsJAX project. The AxsJAX project leveraged the Greasemonkey extension in Firefox to add application-specific scripting implemented in JavaScript. By then, the creator of Greasemonkey had started implementing the Chrome extension framework — Chrome extensions draw heavily from the Greasemonkey experience. With Chrome beginning to provide an increasingly viable platform for creating Web Applications out of pure Web technologies (HTML, JavaScript and CSS), we started leveraging this platform for building a complete access solution authored using Web technologies.

Exposing Platform Services To Web Applications

Chrome extensions are written in HTML, CSS and JavaScript. These extensions get full access to the Document Object Model (DOM) of the pages being viewed. This meant that we could implement a large portion of the access solution in pure JavaScript. What's more, as a Web Application, implementing access to dynamic Web pages proves no harder than providing access to static content — thus, we were able to implement ARIA support from the very beginning of the project.
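
To illustrate why dynamic content poses no extra difficulty once a script can see the live DOM, here is a sketch (not ChromeVox's actual implementation) of a content script that watches for DOM mutations and speaks updates made to ARIA live regions; speak is a placeholder for whatever speech output is available.

  // Illustrative only; not ChromeVox's actual implementation.
  // speak() is a placeholder for the extension's speech output.
  function speak(text) { console.log('speak:', text); }

  // Watch the live DOM; dynamic updates go through the same machinery
  // as static content.
  var observer = new MutationObserver(function(mutations) {
    mutations.forEach(function(mutation) {
      var node = mutation.target.nodeType === Node.ELEMENT_NODE ?
          mutation.target : mutation.target.parentElement;
      var region = node && node.closest('[aria-live]');
      if (region) {
        speak(region.textContent);
      }
    });
  });
  observer.observe(document.body,
      {childList: true, subtree: true, characterData: true});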

However, not everything on a platform can be implemented via JavaScript (at least not yet). Today, Text-To-Speech is still implemented in native code, but see speech synthesis in your browser from Mozilla for the shape of things to come. In addition, in the case of ChromeOS, some parts of the user interface were being implemented using the underlying windowing toolkit. For our ChromeVox solution on ChromeOS, we exposed these to the JavaScript layer via extension APIs — with those APIs in place, we could then implement ChromeVox entirely in JavaScript.
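
For example, speech output is available to an extension's JavaScript through the chrome.tts API; the snippet below is a sketch and assumes the extension's manifest declares the "tts" permission.

  // Sketch of speech output via an extension API; assumes the extension's
  // manifest declares the "tts" permission.
  chrome.tts.speak('ChromeVox is ready', {
    rate: 1.0,
    onEvent: function(event) {
      if (event.type === 'end') {
        console.log('Finished speaking');
      }
    }
  });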

Conclusion: And The Best Is Yet To Come!

As the Web platform continues to evolve, with the Web browser able to access an ever-increasing number of platform-specific services that are in turn exposed to Web Applications via JavaScript, we are only beginning to scratch the surface with respect to what can be built in the space of web-based access technologies.

Web As A Platform For Universal Access

Universal Information Access In Web 1.0

The next few posts will cover the evolution of the Web as a platform for universal information access. As the Web has evolved from a web of documents to a web of applications, the Web Browser — the software used by the majority of users to view the Web — has itself evolved from being a document viewer to an application container. Over the last 10 years, the focus has been on turning the Web browser into a platform for delivering interactive applications — witness the progress from XML HTTP Request (XHR) and AJAX applications as epitomized by Google Maps to the formalization of Web Applications in the context of HTML5. The focus of these posts is to trace the parallel evolution of the affordances needed to turn the Web browser into a platform for delivering adaptive technologies to promote universal access.

Browser-Based Access Technologies In Web 1.0

The 1990's saw the first attempt to build a browser-based software platform within the mainstream world with the ascent of Netscape. Though that attempt fizzled out, it laid the foundations for much of what we see today in the form of Web Applications and cloud computing. In parallel, the accessibility world saw the development of talking browsers — the first of these was PW WebSpeak from Productivity Works, closely followed by IBM Home Page Reader. Like the Netscape browser of the 1990's, neither of these solutions survived — and part of the analysis that follows is an attempt to sketch out how the world of Web programming has changed in the 15 years since.

Things to observe from Web 1.0:

  • The focus in the 1990's was on Web documents with small islands of interactivity created via HTML forms.
  • The document-based Web made all web interaction transactional, thereby requiring server-side round trips at the end of every forms-based interaction.
  • Extending Web browsers with additional functionality was hard — accessibility solutions built using the browser had to be implemented either as a browser plug-in, or by embedding the browser within your own application.

The final point above is perhaps the most significant reason why browser-based accessibility solutions remained hard to implement — in that period, accessibility solutions, like Web Applications in general, could not be implemented using Web technologies.

From A Web Of Documents To A Web Of Applications

The next article in this series will detail the transition from a Web of documents to a Web of applications, and analyse the consequences for building web-based access technologies.

Sunday, May 29, 2011

Introducing The ChromeVox Blog

ChromeVox represents the next step in leveraging Web technology for improving the state of universal information access. This blog will cover the history and evolution of such solutions and lay out the long-term vision for access technologies built on the Web.
