My Top Takeaways From the 2016 Edge Web Summit

Earlier this week, my colleagues on the Microsoft Edge team put on the second of what has now become an annual event: the Edge Web Summit. The format was a little different this year, with team members from across the organization delivering quick, punchy 30-minute talks on topics ranging from standards implementations to the user experience of a browser to real-time communications. I live-tweeted quite a few of the talks, but I thought I’d provide a round-up of what was revealed and discussed so you can read it all in one place.

  • Since launching Edge 8 months ago, the team has pushed 12 update releases, 128 new features, and 6,527 bug fixes!
  • The team has launched a new, highly transparent bug tracker for Edge: issues.microsoftedge.com.
  • The Edge team has done a ton of research into what specs are being used and how they are being used on the open Web. They are starting to share this information on data.microsoftedge.com. It currently consists of 2 parts: 1) usage data from real sites that looks at not only CSS properties in use, but values too; and 2) a catalog of available APIs and a detailed analysis of browser support, down to specific configuration and property values.
  • Hot on the heels of RemoteIE opening up for Linux users, RemoteEdge is coming soon! Jacob Rossi showed a screenshot of an Edge instance running on Azure, within Chrome. So cool!
  • Text-to-speech directly from within JavaScript! (There’s a quick sketch after this list.)
  • The Fetch API! (Also sketched below.)
  • Beacons as an alternative to blocking JavaScript requests for telemetry data: navigator.sendBeacon( uri, data ). (Sketched below.)
  • Web notifications! (Sketched below.)
  • WOFF 2 font support for better compression and faster downloads/decompression!
  • The team is currently prototyping and investigating Service Workers, Push Notifications, Shadow DOM, Custom Elements, Web Payments, WebAssembly, and ES Modules.
  • Cortana in Edge has gotten some major upgrades, such as being able to “Ask Cortana” about an image to get more information (like a recipe for the cookies you saw on Pinterest that didn’t include a link).
  • Microsoft open-sourced the CSS crawler powering their data portal so other browser vendors can run it too.
  • FIDO-based login (like Windows Hello) is coming to the Web!
  • Microsoft’s Narrator screen reader now supports a “Developer Mode” that blanks out the current app (such as your browser window) in order to enable you to more easily debug accessibility issues.
  • The F12 tools in Edge now let you view the previously mysterious Accessibility Tree and drill more deeply into the accessibility-related properties of any element.
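
A few quick sketches of those APIs follow, assuming a browser that supports them. First, text-to-speech via the Web Speech API’s speechSynthesis interface (the spoken phrase is just a placeholder):

```js
// Minimal text-to-speech sketch via the Web Speech API;
// the spoken phrase is placeholder text.
if ("speechSynthesis" in window) {
  var utterance = new SpeechSynthesisUtterance("Welcome to the Edge Web Summit!");
  utterance.lang = "en-US";
  window.speechSynthesis.speak(utterance);
}
```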
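
The Fetch API gives you a promise-based alternative to XMLHttpRequest. A minimal sketch, with a made-up endpoint:

```js
// Fetch a JSON resource and log it; the URL is hypothetical.
fetch("/api/sessions.json")
  .then(function (response) {
    if (!response.ok) {
      throw new Error("HTTP error " + response.status);
    }
    return response.json(); // parse the body as JSON
  })
  .then(function (sessions) {
    console.log(sessions);
  })
  .catch(function (error) {
    console.error("Request failed:", error);
  });
```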
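
Beacons let you hand a small payload to the browser for delivery in the background, rather than blocking navigation with a synchronous request in an unload handler. The endpoint and payload shape here are hypothetical:

```js
// Queue telemetry on page exit without delaying navigation;
// "/analytics" and the payload are placeholders.
window.addEventListener("unload", function () {
  var payload = JSON.stringify({ event: "exit", when: Date.now() });
  navigator.sendBeacon("/analytics", payload);
});
```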
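
And Web Notifications: you request permission first, then post the notification. The copy below is placeholder text:

```js
// Ask for permission, then show a notification (placeholder copy).
Notification.requestPermission(function (permission) {
  if (permission === "granted") {
    new Notification("Edge Web Summit", {
      body: "Thanks for reading!"
    });
  }
});
```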

I didn’t take a ton of notes in the second half of the day as I was prepping for my own session on accessibility, but other highlights included building & debugging extensions for Edge (tl;dr: you can easily port Chrome extensions) and cool things you can do using Continuum.

Overall, the event was incredibly informative and has me really excited about the work the Edge team is doing and where the browser is going. The new stuff that’s ready for prime time will be out for the public in the Anniversary Update of Windows 10 this summer, but some of it has already landed in Windows Insider builds.


Webmentions

  1. Always looks bigger when you pack it into one place. Sometimes I forget how much we actually get done around here. twitter.com/aarongustafson…

Comments

Note: These are comments exported from my old blog. Going forward, replies to my posts are only possible via webmentions.
  1. Šime Vidas
    Microsoft’s Narrator screen reader now supports a “Developer Mode” that blanks out the current app (such as your browser window) in order to enable you to more easily debug accessibility issues.

    Is the idea here to force the developer to use the app only through the screen reader’s speech output, which helps identify accessibility issues?

    1. Aaron Gustafson

      I think "force" might be a bit strong, but yes. It enables us to have a focused accessibility testing experience (yes, using only the speech from Narrator or the displayed text, which is like a position-aware subtitle) without annoying interruptions like having Narrator read the F12 tools when you go to debug an element or view the accessibility tree or switch over to read your email in Outlook.