When it came to building a mobile-web-browser Augmented Reality proof of concept in less than four weeks, we knew two things:
We knew we would run into a whole bunch of unexpected challenges.
And we knew it would get done.
Software development in a time crunch is almost guaranteed to come with eleventh-hour surprises, late nights, and frazzled nerves. And that’s pretty much exactly what happened.
Luckily, the Valtech and Contentstack teams building our Augmented Reality demo do not rattle easily.
The team had come up with a concept for a content-rich AR showcase in less than one week. The week after that, we became subject-matter experts on beauty and skincare, determined the must-haves for a usable, working AR POC, and designed and developed the live AR interactions, including how the data would be structured and pulled from the CMS.
Here’s how it all came together in week three, as we raced to build a working Augmented Reality prototype that would help a retail beauty (skincare) buyer navigate supermarket shelves and browse a brand’s products to receive a personalized recommendation; take a product home and get onboarded to using it; and finally get recommendations for repurchasing, changing usage, or leaving a review.
And spoiler alert: yes, we did all of that, and yes, you can try out the AR app for yourself.
Integration, integration, integration
“If this were Sesame Street, the word of the week would be: Integration!” quipped Danielle, our project manager, at the start of week three.
Week two had consisted of building all the individual ‘parts’ of the application in small ‘samples’ — little scenarios that could, in parallel, all be shown to work. This included things like: designing the scenarios in 2D; displaying the scenarios in AR to look like they did in 2D; programming the app to recognize the bottle moving as a controller for making the experience change in AR; pulling data from Contentstack, and so on. “Integration!” meant actually combining all of it together — is it any surprise that we were expecting to run into some weirdness?
Design: Fusing brand with functionality
The plan all along was to show the app working with three “generic serums” to simulate one beauty brand’s different serum offerings, and thus illustrate how a customer could browse between them in-store using the AR experience.
The thing is, Svante, our designer, had been working on beautiful serum labels, while Alex, our developer, had been figuring out how to make the information we needed display (and persist) in AR using clear markers. (More on that in our week two post.) Since we chose to work with fiducial markers, which are essentially big black boxes with asymmetrical shapes or content inside, Svante’s first task of the week was to fuse, or integrate, these marker “boxes” into custom labels that still looked like beautiful skincare packaging.
Needless to say, he was up to the challenge and created three beautiful label designs that worked seamlessly in the AR app: people could access the AR experience, and it still looked like a real beauty product. Check!
Print these labels at home to try out our AR app.
The rest of the week was a game of expectation vs. reality for design. Prior to this week, Svante had only been designing in 2D, so once the designs went “live” into the 3D AR experience, he needed to make quick adjustments on the fly so things looked and worked the way they were supposed to.
For example, we had originally planned to have a “recommended” ribbon in Scenario 1 which appears wrapped around the bottle that the app recommends for the user accessing the experience. It turned out that it’s pretty tricky to wrap a 2D object around a 3D one, so our wrapped ribbon tails turned into more of a crown.
Development: Will it run?
A-frame skeletons
The week started with Alex building out the A-frame skeletons for the AR experience. Danielle explains:
“For rendering content in 3D space, we used A-frame and AR.js libraries (see the research on the different frameworks we considered here). AR.js is the Augmented Reality component — it makes use of the camera to do computer vision, recognizes markers, and places content on top of the real world. A-frame allows us to describe a 3D scene with HTML-like components. Essentially, you can tell the experience, “there’s going to be text here, and a graphical element there, and something else here”. A-frame can also be used to define gestures, like the rotation and tilt of the bottle. It’s a higher-level programming language than if you were to go straight into WebGL and try to define all these components and sections. So before we could actually start working with the content from Contentstack and the visuals and assets from Svante, we had to actually lay out the skeleton, or template, for where all those pieces would go.”
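To make the idea of a skeleton concrete, here’s a minimal sketch of what that A-frame + AR.js markup can look like. The library versions, marker file, and placeholder content are ours for illustration, not the exact production markup:

```html
<!-- Minimal A-frame + AR.js skeleton (illustrative; versions, file names, and values are placeholders) -->
<script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>

<a-scene embedded arjs="sourceType: webcam;">
  <!-- One fiducial marker per serum; everything inside is anchored to that marker -->
  <a-marker type="pattern" url="markers/serum-a.patt">
    <!-- "Slots" that will later be filled with content from Contentstack -->
    <a-text value="Product name goes here" position="0 0.8 0" align="center" color="#333333"></a-text>
    <a-plane position="0 0 0" rotation="-90 0 0" width="1" height="1" color="#f3e3e0"></a-plane>
  </a-marker>

  <!-- The camera through which the real world (and the AR content) is viewed -->
  <a-entity camera></a-entity>
</a-scene>
```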
Integrating HTML elements into the AR experience
On the HTML side of things, we added some cool graphical HTML elements to the experience, especially noticeable in Scenario 1, where a pop-up in the lower third of the screen gives fuller context to the shopping experience. It can tell you which part of the “skincare routine” you are shopping for and lets you save selections.
Here is where we started to hit some road bumps. Blending the AR.js together with the HTML elements turned out to be trickier than we expected. We discovered that AR.js, as written, adds A-frame elements to the document body, and then sets the size of the body to the dimensions of the web cam, which makes it tricky to integrate properly positioned HTML elements atop an AR scenario.
It wasn’t planned, but the team ended up forking the AR.js code and making a local branch that fixed this issue, so that we could render 2D HTML elements and our 3D AR content as expected.
Note: We love and are fully committed to the open-source community and its practices, and as a next step, we plan to submit these changes back to the AR.js library as a pull request.
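For context, the pop-up itself is ordinary HTML layered over the scene; something along these lines, once the body-sizing quirk is patched (class names and copy are placeholders):

```html
<!-- Illustrative 2D overlay pinned to the lower third of the screen (names and copy are placeholders) -->
<div class="routine-popup">
  <p>You're shopping the serum step of your skincare routine</p>
  <button>Save selection</button>
</div>

<style>
  .routine-popup {
    position: fixed;   /* pin to the viewport rather than the (resized) document body */
    bottom: 0;
    left: 0;
    right: 0;
    padding: 1rem;
    background: rgba(255, 255, 255, 0.9);
    z-index: 10;       /* draw above the AR canvas */
  }
</style>
```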
We also ran into some issues with tap gestures, because raycasting (which is how you click on objects in a 3D scene) was not working properly. This was due to some customizations in the AR.js “scene camera” setup (the view into the 3D world). Once again, we knew how we could fix it, but we didn’t have time in this final week of development. Luckily, the team came up with the idea of using a “swipe” gesture instead, and this worked reliably and felt natural in use.
Note 2: These kinds of issues could also be resolved by working with “native” AR tools, i.e. ones that leverage Android and iOS AR frameworks. We chose to use a mobile-web-browser-based experience to make user access as seamless as possible. We knew this would come with tradeoffs, and these are just a couple of examples. Building mobile-web AR experiences is still a little bit of a “wild west”, and we are all still learning about how to make them better.
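For the curious, the swipe workaround itself is straightforward on the browser side: plain touch events are enough. A minimal sketch (the threshold and the logging stand in for our real handlers):

```html
<script>
  // Illustrative swipe detection using plain touch events (threshold and handler are placeholders)
  let touchStartX = null;

  document.addEventListener('touchstart', (event) => {
    touchStartX = event.changedTouches[0].clientX;
  });

  document.addEventListener('touchend', (event) => {
    if (touchStartX === null) return;
    const deltaX = event.changedTouches[0].clientX - touchStartX;
    touchStartX = null;

    // Only treat sufficiently long horizontal movements as swipes
    if (Math.abs(deltaX) > 50) {
      const direction = deltaX > 0 ? 'right' : 'left';
      console.log('swipe', direction); // e.g. step to the next/previous product in the scenario
    }
  });
</script>
```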
Text height
An unfortunate reality of working with 3D graphics is that, unlike 2D browser renderers, A-frame (and many other 3D libraries) doesn’t automatically figure out the dimensions of elements and make them flow and stack the way a document object model does. That was, of course, an issue when pulling text from Contentstack that might be dynamic, change over time, or simply not come with a known “text height”. With the clock ticking, we decided to hardcode the text layouts so we could finish the work by the end of week three. However, the issue has since been addressed and corrected through a few sneaky post-week-three hours.
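In practice, “hardcoding the layout” simply means giving each text element a fixed position, width, and wrap, regardless of how long the copy from the CMS turns out to be. Roughly like this (all values are illustrative):

```html
<!-- Illustrative hardcoded text layout: fixed position, width, and wrapping, whatever the CMS sends -->
<a-text
  value="Placeholder copy pulled from Contentstack"
  position="0 1.2 0"
  width="1.5"
  wrap-count="28"
  align="center">
</a-text>
```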
Parcel.js issues
We also had some weird, unexpected bugs with our build tool, Parcel.js: it was unable to use our custom fonts (fixed by hosting the font files as Contentstack assets), and it referenced HTML files incorrectly (addressed by debugging Parcel’s command-line parameters to make sure the paths in the built files were generated correctly). We figured it out, but it was another eleventh-hour surprise... exactly the kind you should expect in software development at a fast clip!
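To give a flavor of the font workaround: instead of letting Parcel bundle the font files, the stylesheet can point straight at a hosted asset URL. A sketch, with a made-up font name and a truncated placeholder URL rather than our real asset:

```html
<style>
  /* Illustrative: load the custom font from a hosted (e.g. Contentstack) asset URL
     instead of bundling it through Parcel. Font name and URL are placeholders. */
  @font-face {
    font-family: "CustomSerumFont";
    src: url("https://assets.contentstack.io/.../custom-serum-font.woff2") format("woff2");
  }

  body {
    font-family: "CustomSerumFont", sans-serif;
  }
</style>
```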
Reusing markers
When it came to actually using the application, we wanted to put everything into a single HTML file, accessible from one button, so that we would only have to ask for permissions to use the camera and the sensors once, and leverage a lot of the same elements. The problem was that re-using a marker and associating it with different 3D content for different scenarios was really tricky and created conflicts.
Making this work would have required more code than we had time for, so for now, each of the three scenarios is a separate document and requires its own permissions, though they can all be accessed through in-experience buttons once you’ve launched a scenario.
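Concretely, that just means each scenario lives in its own HTML document, with simple links doing the scene-switching; for example (file names and labels are placeholders):

```html
<!-- Illustrative: in scenario-1.html, in-experience buttons link out to the other scenario documents -->
<a class="scenario-button" href="scenario-2.html">Take it home: onboarding</a>
<a class="scenario-button" href="scenario-3.html">Repurchase &amp; review</a>
```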
And finally...
We did it!
The team pulled together: with the help of Ben & Gal at Contentstack, a few final late nights from Alex, Jason, and Svante, and everyone else cheering them on with fingers crossed... we built the thing. It’s working and it’s live.
Check out the demo video below.
And of course, try it for yourself: Go to spyglass.valtech.engineering, print the labels, and see the magic happen!
Stay tuned for our lessons learned summary with lots more details on how we built this Augmented Reality demo, what enterprises need to know about building out AR experiences, and why we believe there’s endless opportunity to explore emerging technologies with MACH (microservices, API-first, cloud-native, and headless) architecture.