When Snapchat first became a phenomenon among teens in 2012, the focus understandably was on such distinguishing features as its promise of self-destructing photo messages on an internet that would never forget, its winking ghost icon, and how there was no comforting home screen to warm you up to the new experience. Instead, it opened right to the camera that was likely shooting right up your nose. You could only make out how to use the thing by tapping around the screen, or, more likely, asking a (younger) person.
Much of the credit for Snap’s idiosyncratic nature accrued to Snap CEO and cofounder Evan Spiegel, but behind the scenes, the person responsible for making that experience work was Spiegel’s cofounder, Snap CTO Bobby Murphy.
Murphy is the quiet technical genius transforming Snap, building on its roots as a visual messaging app to bring augmented reality into the lives of more than 200 million people without them even realizing it. While everyone was waiting for Magic Leap, Microsoft’s HoloLens, or some other miraculous headset, Snap inceptioned AR upon the populace with silly face filter “Lenses” on the device you already had. Before you knew it, Snap became as well known for its AR dancing hot dog as for its self-destructing messages.
Snap went public in 2017 only to watch its stock balloon and then rapidly deflate. Investors grew wary in the wake of Instagram, the stoic feed of people’s food photos, successfully copying Snapchat’s brash, visual style and its addictive Stories format (a seamless video that combines all of your friends’ videos into one). While Instagram soared, Snap stagnated.
But then Snap started to turn things around. The company rolled back an unpopular redesign and developed a new Android app that would make the service more functional on a majority of phones around the world. All the while Snap continued to release new enhancements: location-sharing maps; Lens Studio, which allows anyone to create augmented-reality Lenses; this spring’s viral gender-swapping filter.
Earlier this year, Snap announced a slew of new plans—from multiplayer games that you can access right through the familiar camera, to an onslaught of new world-facing AR capabilities under the new name Scan, which Murphy is particularly excited about. Snap recently released zanily animated beacons called Landmarkers—the Eiffel Tower can pop like a champagne bottle for tourists. And Scan will include an integration with Giphy that can automatically spot something like pizza in your frame and add a pizza GIF. The platform offers more practical camera capabilities, too, like solving math problems for you, that hint at Snap’s ambitious future as a functional AR tool.
Murphy’s work in leading the development of these new product initiatives and his continued public iteration of augmented reality right before our eyes is one major reason for the company’s renewed momentum. In its last quarterly earnings, Snap announced, for the first time in a year, that it added 13 million new users, and it boosted revenue by 48% year-over-year (though the company continues to lose over $100 million a quarter). According to Snap, more than 75% of the U.S. population ages 13 to 34 use Snapchat, which outperforms Facebook and Instagram in those demographics.
As Snap stands on the precipice of what could be a storied comeback, I spoke with Murphy, in what appears to be the first interview he’s done since Snap went public in March 2017. We mostly discussed his vision for an augmented reality-led Snap, and he articulated how he conceptualizes Snap as a platform, along with how Snap might scale the idea of a functional, augmented reality world—in a way that Google, Magic Leap, Facebook, and Microsoft can’t.
Fast Company: When Snap first went public, I think the company’s narrative was this promise: “We will out-innovate our rivals, don’t worry. We’re still going to keep doing it like we’ve been doing it.” The market didn’t necessarily agree with that for a long while, but I think two-plus years later, it’s proving out. You keep doing new things. How have you been able to keep that cadence going in terms of product development?
Bobby Murphy: A lot of this is credit to Evan, frankly, and the team that he’s built around himself, our executive team. I think we’re all really a group of people that are extremely motivated by the long-term potential of what Snap is doing, and so that longer-term mindset has remained a focus of ours through the last eight years, since the beginning. We’ve been fortunate to build a team and a company that is kind of resilient to some of the market perceptions of what we’re doing, and remains committed to investing in this long-term direction.
FC: One thing I’ve been thinking about is that Snapchat launched in 2012. Lenses themselves weren’t introduced until about 2015, but now they’ve become really integral to using the app. By 2016, you were investing over $100 million in acquisitions around AR. So, I look at that timeline, and I’m wondering what happened in 2015? Was there an inflection point internally where you said, “AR, there’s something here. AR makes sense, Lenses make sense. We need to go all in with this”?
BM: Actually, it’s fun to hear that timeline played back . . .
FC: Feel free to tell me why!
BM: Well, generally, the idea of AR is actually incredibly aligned with the way that we think about Snapchat as a platform. If you think back to 2012, Snapchat is really about visual communication, this idea of opening up into the camera, and enabling and empowering creative expression through the camera. That naturally led to us investing in creative tools like captions overlaid on images, videos, stickers, drawing tools, and geofilters. I mean, these are all, if you think about it, ways to augment visual input with creative experience.
In that sense, AR is really just kind of an incremental step, only pushing that experience . . . from post-capture into a pre-capture, live camera setting. So even from very early on, we recognized that this kind of organic user engagement around the camera would allow us to do some really interesting things with the camera, including AR from well before we started investing more heavily into the space.
That said, I think what really happened in 2015 was we launched Lenses for the first time, and those started out, again, as really purely about creative expression, like hearts on the eyes. We just saw an incredible amount of excitement and enthusiasm around this product, and then it kind of laid the foundation for all of the things that we’ve done since then, including extending the technical capabilities, the content capabilities, and the ecosystem that is kind of blended today.
FC: Were engagement rates with Lenses just off the charts? Did it surprise you, the response you saw?
BM: I don’t think we were surprised, per se. I’d have to go back and look at the numbers in 2015. But in line with the way we’ve launched products in the past, it was something we knew we were excited about internally as a product and a feature, and I think we were excited that our larger audience shared the same enthusiasm that we had.
FC: The idea of framing Lenses as a creative tool in the line of many different creative tools you’ve built makes a lot of sense. Especially if you think about where AR apps were a few years ago: you saw some interesting one-off experiences on the phone that people hacked together—and that was it. They were all just sort of novelties.
BM: Right. Yeah. A key kind of opportunity for us has been this camera-first application, where there’s already this existing behavior around opening up the camera and taking photos—many, many photos and videos—and communicating through them. We talk about it a lot, this idea of being a camera company. But fundamentally what that means is that, as opposed to what I would maybe call a photo-sharing service like most everything else out there in the camera space, the behavior is really around capturing and creating—in a lot of cases, an order of magnitude—more visual content than you would share anywhere else. That kind of natural, organic engagement has given us an opportunity to layer on these kinds of creative and imaginative experiences in a way that feels very natural to the evolution of our product.
FC: I am constantly trying to categorize what Snap is other than the tagline of “a camera company.” Thinking back to what you talked about, the camera is the touchpoint for all experiences, and then you have Lenses on top of the camera that are now, like you said, ushering in new things you can do: multiplayer gaming, solving math problems, shopping. Plus you have community-made Lenses and spots in your AR Scan platform. So you’re to some extent crowdsourcing new functions. What’s the mental anchor you’re using as you add more complications to Lenses and Scan? To me—and correct me on this if you hate this metaphor—Lenses have almost become apps, and 3D face filters are almost like an app icon. They might usher you into something totally different.
BM: Broadly speaking, yes, we view Lenses as really a highly functional framework for doing a wide range of things. In the same way that we layered creative experiences onto the existing camera engagement, now that we have this engagement with AR, we can start to layer on and expand use cases. It’s really about going on this journey with our users. We’re in this sort of, I wouldn’t call it a transition period, but it’s certainly like an expansionary period. Currently, the camera is used predominantly for taking photos and communicating with your friends. We see a future world in which camera communication becomes one of many different things.
Just on Lenses alone, as you pointed out, most of the Lenses we see help you express yourself, or be creative with your friends. But increasingly, they’re about experiencing the world itself, or interacting with the world in which the sharing of it becomes secondary, or kind of unnecessary to the value of that experience, if that makes sense.
FC: With the camera becoming this mega-app platform that other developers and artists can plug into, was that the idea from the start? Or did you kind of scale to a place where you couldn’t do all of this on your own?
BM: We started with a lot of elements of this idea of a platform from the very beginning, and obviously the magic and excitement of doing anything in this space is how you get there. If you think about the distant future of AR in which you can, on whatever device, look out into the world and everything comes alive in some sort of meaningful, or useful, or valuable way, for that world to exist, we need a tremendous amount of content to be built specifically with that interface interaction in mind. We know that the best way to do that is to take the same tool that we built ourselves internally to build some wonderful, creative use cases, and expand that out into a broader ecosystem.
In a lot of ways, if you think of the face right now as the most mature, robust input to an AR experience that exists in the world—which I think you’d probably agree that it is—then one of our missions is to turn everything else in the world into an equivalently understandable experience.
FC: What should we do with everything else in AR? The classic example always seems to be, “I don’t know, maybe directions to the subway that superimpose on the street.” You’ve released world lenses, and some other things that are immediately interesting, but they don’t have that same sort of stickiness that the face stuff has. It’s like, what else?
BM: If you said to me, “The only thing AR will ever amount to was this beautiful new way to experience someone else’s imagination through your own eyes,” that would be enough for us! The face has a lot of advantages, obviously. Everybody has one, so it’s immediately accessible. It’s very understandable from both a creator perspective in terms of, how do I build content on a face? and also a user perspective of, how do I access—and use—AR content that’s built for my face?
A lot of the opportunity we have in trying to flip that out into the real world is that it’s not just about computer vision applied to understanding the world; it’s also doing that in connection and in collaboration with the creator mindset, so that you can think about how to build use cases around that understanding. Then similarly, how do you build awareness in users around it?
I think what we’ve done with our Landmarkers is a great early step in that direction, where it’s incredibly easy for a user to understand what you can do with a Landmarker, and it’s incredibly easy for a creator to understand what you can do with a Landmarker. The challenge, of course, is that there aren’t that many of them yet, and we’ll continue to scale those up, but eventually we hope to get to a place where there are sufficient experiences out in the world that the users have an awareness of what they can and can’t scan, and someday that gets to everything. Doing this in a categorical, or incremental, way has been our approach.
FC: Your AR headset, Spectacles, was a cultural phenomenon when it hit, but it didn’t reach sales expectations. Spectacles 2 kind of felt like they were neither a cultural phenomenon nor a market phenomenon. What have you learned internally about what might have gone wrong with Spectacles?
BM: We have this long-term vision of computing overlays onto the world, and this idea that applications move from 2D screens out into the spatial and 3D space. We know that hardware is going to be a major component of realizing that future, and so Spectacles, and our Snap Labs team that builds Spectacles, are a big part of our goal. The different iterations of Spectacles are