Thirteen days with a lot of questions
Two weeks from now, we will very likely have spent 24 hours knowing about the first(1) bricks of Apple’s augmented/mixed reality ecosystem.
Most of the rumours focus on the headset itself, but it’s the OS that I can’t wait to find out about. The industrial designer in me is impatient to know about the solutions they came up with to make it a great product fit for most of us, and the design language it will bear. The AirPods Max has a direct lineage in shape with the Watch, but wrapping around the face is a different challenge than covering a set of ears. This might be a moment where the Industrial Design team introduces new shape and material features rather than sticking to the ones we’re used to.
The OS will probably be a complete novelty. The keynote this year may well be one of those rare moments where we get to learn about new interaction metaphors and new ways to interact with information. What’s the equivalent of the desktop? How are we going to select, group, delete, add, browse lists, locate, search, read, and write? I expect eye and finger tracking to be close to magical, as it will be the basis for a fluid relationship with these new metaphors.

If the headset ends up projecting content into our immediate physical space one way or another, which I intensely hope it does, the way content and space relate to each other is another aspect I’m impatient about. Can you anchor content to a room, an object, a moving object? Can you associate content with a specific AirTag and gift it to someone? Can you share or send your physical space? Can you use other existing devices such as the Watch, the AirPods, and the cameras on your iPhone and computers to enrich your experience? What does Time Machine look like?

Another aspect I’m expecting a lot from is the integration of these new metaphors with our physical space. To blend in, these objects will have to be shaped and rendered in a way that our brain understands them as a legitimate part of the physical world as we know it. But having them behave exactly like real-world objects would also be dull. How far the interaction designers have gone to make them feel magical and totally obvious at the same time (as they did with the Dynamic Island, for example) is what I’m most curious about.
Potential use cases have also been widely discussed these past weeks, with many doubts expressed. Much will depend on the field of view the headset provides: the wider it is, the more use cases become possible. It might be that I’m blinded by excitement, but from my point of view, there is so much to be done. Imagine sitting at the piano and having the hands of your teacher shadowing yours to help you practise the right movements. Learning swimming techniques, correcting fitness moves, discovering your bone and muscle structure with a superimposed skeleton, training for first aid assistance. Getting a preview of your house renovations. Planning with a calendar not limited in size by a sheet of paper or a screen. Visualising complex data sets on the surface of a conference room table. Playing a basketball game in a dining room. I’m not a gambler, but if I were, I would bet the headset is already used internally to monitor supply chains and other complex operations.
One aspect that has me a little worried for now is the impact on vision health. Having displays this close to your eyes for an hour might be OK, but four? I’m old enough to remember working late nights on CRTs, and it wasn’t good. I’m also worried about the tools that will be available to start designing products for this OS. Today I know Sketch, Photoshop, Illustrator, and a bit of After Effects. I used to model in Rhino3D and SolidWorks, and did some basic renderings there as well. But I expect I will have to learn SwiftUI if I ever want to be able to build mockups for this OS. We’ll see. Thirteen days.
(1) The Apple Watch and AirPods may end up being the actual first bricks. ↩