Eric Eldon and I are at the Google I/O conference in San Francisco, where Google is launching a new product called Google Wave today as a “developer preview”. The goal of the product is apparently to reinvent email/online communication. We’ll update this post throughout the keynote speech.

[Update: Read my later post to get a better sense of what Wave is and why it’s exciting.]

Vice President of Engineering Vic Gundotra told the audience to “Keep telling yourself it’s an HTML 5 application.”


9:05am: It looks a bit like an instant messaging feed, with a tree (i.e., threaded) structure of messages.

9:07am: Messages show up live, character-by-character, rather than having to hit “done” as in instant messaging.

9:09am: You can actually turn this feature off, so your messages don’t show up until you hit done.

9:10am: There’s a feature called “Playback,” which is basically a recording of the conversation as messages appeared.
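If a wave is stored as an ordered log of operations, “Playback” falls out almost for free: replay the log up to any step. Here’s a minimal sketch of that idea in Python — my own illustration (the authors, actions, and data format are invented), not Google’s implementation:

```python
def replay(op_log, step):
    """Reconstruct the conversation as it looked after `step` operations."""
    messages = []
    for author, action, payload in op_log[:step]:
        if action == "add":
            # A new message is appended to the conversation.
            messages.append((author, payload))
        elif action == "edit":
            # An existing message is rewritten by (possibly another) author.
            index, new_text = payload
            messages[index] = (author, new_text)
    return messages

# Hypothetical log: each entry is (author, action, payload).
log = [
    ("Lars", "add", "Hello!"),
    ("Jens", "add", "Hi Lars"),
    ("Lars", "edit", (0, "Hello, everyone!")),
]

print(replay(log, 1))  # [('Lars', 'Hello!')]
print(replay(log, 3))  # full conversation, with the first message edited
```

Scrubbing the playback slider would just mean calling `replay` with different values of `step`.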

9:12am: Parts of the conversation can be private.

9:13am: How do attachments work? Just drag them into the conversation. And you already see preview images as the attachments are loading. The audience is applauding a lot. “Don’t be shy guys … we can handle any amount of applause.”

9:14am: You can drag and drop groups of photos too. You can also extract photos from an existing wave and post them in a new one.

9:15am: What kinds of APIs? For one thing, you can embed Wave into a blog.

9:19am: Comments left on a blog also show up in your Wave account, allowing other users to participate in the conversation.

9:20am: That means you don’t have to check a dozen different sites to see a conversation proceed, “which will make flame wars so much more effective.”

9:21am: You can also embed Wave into social networks. You can use contacts other than Wave contacts. And you can search for conversations by contact (even within the embedded wave).

9:23am: Now they’re demonstrating Wave on Android and iPhones. Demo is a bit dodgy, because they can’t get a good network connection.

9:24am: Another example of how editing of messages works with embedded Waves: You can edit a caption of an attached photo, and it updates live across embedded waves.

9:25am: Users can edit messages from other users, but you can also view the markups, so it’s clear who wrote what. (It’s sort of a Wikipedia editing style, but for personal communication.)
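One way to make “who wrote what” visible is to tag every character with its author as it’s inserted. This toy Python sketch is my own illustration of the concept, not Wave’s actual data model:

```python
def insert(doc, pos, text, author):
    """Insert `text` at `pos`, tagging every character with its author."""
    return doc[:pos] + [(ch, author) for ch in text] + doc[pos:]

def render(doc):
    """Plain text, with the authorship tags stripped."""
    return "".join(ch for ch, _ in doc)

def authors_of(doc):
    """Everyone who contributed at least one character."""
    return {author for _, author in doc}

# Hypothetical two-author edit history:
doc = insert([], 0, "hello", "alice")
doc = insert(doc, 5, " world", "bob")
print(render(doc))              # hello world
print(sorted(authors_of(doc)))  # ['alice', 'bob']
```

A renderer could color each character by its author tag, producing exactly the markup view described above.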

9:27am: You can collaboratively edit documents, plus any other message in the tree.

9:30am: Not only can you extract images, but also the current version of a Wave. This is useful for extracting the current version of a document that you’ve been collaborating on.

9:31am: Also possible to merge waves/documents.

9:32am: The “content model” is “extendable,” meaning that you can bring the same collaboration model to spreadsheets, presentations, etc.

9:33am: Here’s the “hardest thing” the team was working on: You can have four people editing a document next to each other (like, editing the same line), and it still seems to work. It shows you exactly what people are doing.
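The white papers describe this concurrency machinery as operational transformation: when two edits race, one is rewritten so it still lands correctly after the other. A minimal Python sketch of the insert/insert case, in my own simplified operation format (real systems also need a site-ID tie-break when positions are equal):

```python
def apply_insert(text, op):
    """Apply an insert operation (position, string) to a text."""
    pos, s = op
    return text[:pos] + s + text[pos:]

def transform(op_a, op_b):
    """Rewrite op_a so it can be applied after op_b has already landed."""
    pos_a, s_a = op_a
    pos_b, s_b = op_b
    if pos_b <= pos_a:
        # op_b inserted at or before op_a's position, so shift op_a right.
        return (pos_a + len(s_b), s_a)
    return op_a

base = "shared line"
op_alice = (0, "our ")    # Alice types at the start...
op_bob = (7, "edited ")   # ...while Bob types in the middle, concurrently.

# Alice's replica applies her op, then Bob's op transformed against hers:
at_alice = apply_insert(apply_insert(base, op_alice), transform(op_bob, op_alice))
# Bob's replica applies his op, then Alice's op transformed against his:
at_bob = apply_insert(apply_insert(base, op_bob), transform(op_alice, op_bob))

print(at_alice)            # our shared edited line
print(at_alice == at_bob)  # True: both replicas converge
```

With character-level ops streamed this way, even four people typing on the same line can converge on one document — which is what the demo shows.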

9:34am: You could also look at the notes from a live meeting (that you’re not attending) and drop in comments or questions. This also supports right-to-left languages.

9:35am: This is built completely in the Google Web Toolkit. The team wrote it in Java, then used the Toolkit to compile it into the JavaScript/HTML 5 app that runs in the browser.

9:37am: Allowing Wave to work in both desktop browsers and mobile browsers only adds about 5 percent of extra work.

9:38am: How do you organize Waves? You can use folders and saved searches, and you can also use tags (which are shared with everyone else in a wave). You can also create a wave of waves, which is “the most powerful way to organize waves.”

9:40am: Looking for the right balance between “speed and not being interrupted too often.”

9:41am: Extensions! Third-party extensions will be “first-class citizens” on Wave. Now demonstrating a spell checker that takes the context of the word into account. It can detect situations where “bean” is more likely to be the correct spelling, vs. situations where “been” is the correct spelling. “Pretty cool, huh?”
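A context-aware checker like this presumably leans on large-scale language statistics. A toy version of the idea in Python, using a hand-made bigram table (the counts are invented purely for illustration):

```python
# Invented bigram counts: how often word B follows word A in some corpus.
BIGRAM_COUNTS = {
    ("has", "been"): 900,
    ("has", "bean"): 1,
    ("soy", "bean"): 50,
    ("soy", "been"): 0,
}

def pick_spelling(previous_word, candidates):
    """Choose the candidate most likely to follow `previous_word`."""
    return max(candidates, key=lambda w: BIGRAM_COUNTS.get((previous_word, w), 0))

print(pick_spelling("has", ["bean", "been"]))  # been
print(pick_spelling("soy", ["bean", "been"]))  # bean
```

The real extension would use far richer statistics and run server-side, but the decision it makes per word is of this shape: which spelling best fits the surrounding context.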

9:42am: Also sophisticated understanding of what is/isn’t a link, turning the link on and off as you type.

9:44am: The spell checker uses APIs to sit on the Wave server and watch as you type. The link detector works the same way.

9:45am: External versions of these APIs are available to third-party developers. “Almost all the power of the internal APIs.”

9:46am: Examples of third-party extensions built with external APIs. Any OpenSocial gadget can sit inside a wave.

9:47am: In an active conversation, it might be hard to find the relevant information, so it can make sense to edit the initial message rather than adding yet another reply to the thread. In the example, a Google engineer created a “Yes, No, Maybe” gadget for RSVPs in a Wave.

9:48am: You can also build games as gadgets in a wave. Demonstrating Sudoku on-stage.

9:49am: Demonstrating playback with gadgets. For example, with a chess gadget, you could step through the whole game.

9:50am: You can also embed and edit maps in a Wave.

9:51am: You can also have forms in waves. Starting to think that the rest of this demonstration is going to consist of, “Hey, you want to do this in a Wave? Well, you totally can.”

9:56am: Re: my above comment, a Wave can also become a Twitter client. Replies in Wave are also translated into @replies.

9:59am: Another possible extension: Using a Wave to debug while programming. A Wave can be integrated with an “Issue tracker” to file and communicate about different bugs.

10:00am: “Do you guys feel inspired yet?” Um, yes.

10:04am: Now showing a branded version of a corporate Wave.

10:05am: Even with Waves on different systems, messages still update in real-time.

10:07am: If you do a branded wave on a corporate server and start a private conversation, those messages never leave the corporate server.

10:08am: Website going up later today with four white papers describing how it works and a public discussion forum. Most of the Wave code will be open sourced soon.

10:10am: Real-time translation of messages into other languages, too.

10:11am: “That finishes our demo.” Standing ovation from the crowd, which I would have joined in if I wasn’t busy typing. “If you guys really liked it, you could do a wave.”

10:14am: Wrapping up the presentation now. If you want to learn more, check out the Wave site. We’re also working on a longer post highlighting the importance of the announcement, and I’ll probably be posting from the post-keynote press conference too.

10:20am: In conclusion, I WANT THIS NOW.
