XR on the Bay Recap: aka the Day Roger Didn’t Get to Meet Silicon Valley’s Gilfoyle (Part 1)
Hi, it’s Rog. On June 26, at Cisco’s Building No. 9 in San Jose, we and the Advanced Imaging Society hosted our first-ever “XR on the Bay: Where Silicon Valley Meets Hollywood” event. Jim Chabin, President of the AIS, kicked things off with a rather brilliant observation that summed up the entire endeavor: “Technology empowers artists, and artists empower technology. For the longest time, the technology community existed in Silicon Valley, and the creative community in Hollywood. Those cultures haven’t gotten a chance to know each other quite as well. That effort has now started.”
The gigantic highlight of the day was a Q&A Chabin did in the afternoon with Martin Starr, known currently for his portrayal of the outstandingly sarcastic Bertram Gilfoyle on HBO’s Silicon Valley, to many of us a patron saint for geeks. Really, the whole program was spot on, with a range of topics and speakers that illuminated the state-of-the-state of Augmented Reality, Virtual Reality, Deep Learning, and related aspects of Artificial Intelligence.
As it turns out, I (Rog) couldn’t be there, because of a long-standing family commitment. So I dispatched my friend and colleague, the writer Leslie Ellis, to cover it. Afterwards, I asked her to “tell me everything!” She laughed and pointed out that she had about 35 pages of verbatim notes. We decided to go with a Q&A approach for this report. So here is our attempt to be your (and my) ears and eyes for this exceptional and much appreciated event. Please enjoy this edited transcript of a conversation that stretched over a few days. Here goes!
Roger Sherwood: It is killing me (killing me!) that I couldn’t attend this, but I’m glad you were there. Tell me everything! But first of all, what did you think? Good event?
Leslie Ellis: Really good. I learned a ton. Thanks for being out of town. :)
RS: Tell me the parts you’ve marked in bold in your notes. I know how you work….
LE: There’s a lot in bold! The first happened early in the day, and really blew my mind. It was in Jonathan Miranda’s keynote. He’s the director of strategy and technology at Salesforce.com. Here’s the quote I bolded to come back to later: “We’re doing a project with DARPA to help them map out a cubic millimeter of a human brain. It’s going to take seventeen years.”
RS: Holy crap. Seventeen years to map a fraction of the brain.
LE: Really puts the scope of it into perspective! He also detailed something called the AutismGlass Project. He has two boys, ages seven and nine, both on the autism spectrum. They wear Google Glass to school. The camera on the glasses can immediately identify a person by name, as well as the emotion that person is experiencing. A fellow student falls on the playground: Is he laughing, or is he in pain? He said it was life changing. Plus, it’s happening right now, today. He told the audience that you can sign up for around $100 to try it out.
RS: That’s so cool. Speaking of audience, how was it?
LE: It filled the room. It got a little sparse towards the end of the day, and especially right after Martin Starr left — you can imagine the line that suddenly formed near him for selfies!
RS: OMG I totally can. What threads of conversation were repeated throughout the day?
LE: One was “location-based entertainment.” Another was “volumetric capture.”
RS: Go on….
LE: Let’s see. It started with Gary Radburn, director of workstation, commercial VR/AR, at Dell (who wanted his title to be “director of virtually everything,” nyuck-nyuck). He described volumetric capture as “objects with real volume.”
Another was Grant Anderson, a very active and well-known AR/VR producer. He described volumetric capture as “where you capture the entire room, and create a mesh … that’s the future, no doubt, but today it’s 10 gigabits per second of capture. The processing of it can take weeks.” He said it’s also a way to make assets reusable.
RS: What kind of assets?
LE: Like a TV or movie set. Capture it in 360 degrees, to create a 3D model, then re-use that set however and whenever you want. Grant, by the way, also had one of my favorite “aha” lines of the day.
RS: Which was? God I am so bummed I missed this.
LE: It was kind of in two “huh!” moments (again, for me). One was that “AR will be a daily computing platform; VR will be like going to the movies now, to escape.” The other was that “AR will bring VR along for the ride.” He noted that Apple came out with ARKit before they released the eyewear, “so as to have a developer ecosystem already developing for it.”
Another session that really opened my eyes to what’s going on in AR was the one titled “The Immersive Mind and Body.” It was about the rise of Brain-Computer Interfaces, or BCI. (A new one on me. I’ve been hearing about it as “Human-Machine Interface.” FWIW, I think I like “BCI” better.)
One of the speakers was Dr. F. Kennedy McDaniel, lead operator at Koniku (a Nigerian word that means “has no death”). They’re working on “wetware,” best simplified as silicon that emulates human neurons. One use case she mentioned: odor detection, for airport security, or early-stage cancer detection. The best technology we have now, she said, is dogs: “Dogs are great, but they have some significant limitations. They’re expensive to train, have limited working areas, and you can train them to smell, but they don’t know what smell it is.”
RS: My dog is a doofus.
LE: LOL. Mine thinks everything smells like the squirrel he’ll never catch. So anyway, those intersections between AR and health are really interesting. Up next was your buddy from Walt Disney Studios, Benjamin Havey, VP/Technology. He was aces. Clearly passionate about storytelling. So many interesting little insider-insights that I don’t think many people (tech people, anyway) would’ve known about otherwise!
RS: Such as?
Find out in Part 2!