AR Food, from plate to Augmented Reality
How do you make a 3D model of an everyday object that can later be experienced as an Augmented Reality effect on Instagram, Facebook or Snapchat?
That was the question I had a few months ago.
I have seen similar projects and I was thrilled with how edible AR food looks. I set myself the task of learning how to do it.
The goal is to make an “edible” 3D model that will be displayed in AR.
Over many years of work I have gained experience with various graphics programs, both raster and vector, including the entire Adobe suite, as well as with various 3D programs, which certainly helped me in the process of making 3D models. As you'll read below, I used multiple applications throughout the process of creating this Augmented Reality effect. Ten years ago I worked in Bryce 3D for the first time, and later in Cinema 4D, 3DS Max and others. However, I chose Blender for this project because I have been using it a lot lately and it has proven to be a very powerful tool; best of all, it is free.
The whole process consists of several parts so we will go through them in order.
- Photographing the object
- Processing the photos
- Converting the photos to a 3D model
- Processing the model in a 3D program
- Optimizing and preparing for AR
- Creating the Augmented Reality effect in Spark AR (Facebook / Instagram) or Lens Studio (Snapchat)
- Viewing on Instagram, Facebook or Snapchat
Photogrammetry is the science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena. (Wikipedia)
I first tried taking photos in two photo studios, with professional equipment borrowed from fellow photographers, so that I would get superbly sharp, high-resolution photos without shadows. After several attempts at both studio photography and photography with a mobile phone, I came to the conclusion that it is possible to take the photos with minimal equipment, i.e. with a mobile phone. I used a Samsung S20 (1440 x 3200 pixel display, 20:9 ratio, ~563 ppi, 12 MP camera). Along with it, I used a light ring purchased on AliExpress and a table LED lamp covered with white paper to get diffused light.
The phone was mounted on a tripod with the light ring, and the object sat on a table, on a rotating tray called SNUDDA (a Lazy Susan) bought at IKEA. I had previously painted the tray black; however, a fellow photographer later stuck pieces of tape on it to serve as markers, and this proved to be a good idea.
The objects I photographed were up to 40 cm in diameter, although larger objects could realistically be captured as well.
After setting up the scene, the stand, the light ring and the phone, and turning on and adjusting the top light, the object is placed in the middle of the tray and rotated through a full circle to check that everything fits in the frame.
I used Pro mode on my phone, which can shoot RAW photos; this is important because RAW files can be processed much more flexibly later.
I took photos in portrait mode, rotating the tray by about 10 degrees after each shot. Since the Samsung S20 offers voice control of the shutter, I have never said the word “Shoot” so many times in a period of ten minutes :)
For each full 360° circle, I took approximately 30-40 photos. After each round, I tilted the object and repeated the full circle so that the entire surface was covered and the model was uninterrupted, i.e. so that the application converting it to 3D could recognize it more easily and accurately.
Note that in this case more means better, although in some cases I got almost identical results with fewer photos per lap; it all depends on the model itself.
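The shot counts above follow directly from the step angle; here is a rough sketch of the arithmetic (the 10° step and three passes are just this article's example values):

```python
import math

def shots_per_circle(step_deg: float) -> int:
    """Number of shots needed to cover a full 360° rotation
    when the tray is turned by step_deg between photos."""
    return math.ceil(360 / step_deg)

def total_shots(step_deg: float, passes: int) -> int:
    """Total photos over several passes (e.g. upright, tilted, top view)."""
    return shots_per_circle(step_deg) * passes

# ~10° steps give 36 shots per circle; three passes cover the whole surface.
print(shots_per_circle(10))   # 36
print(total_shots(10, 3))     # 108
```

That lands in the 200-400 photo range mentioned below once you add extra angles and safety shots.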
Also, depending on the subject being photographed, I sometimes raised the tripod and “tilted” the camera down to cover the “top view”.
Once the object is covered from all sides (top, side, bottom), the photos are imported into Adobe Lightroom for cropping and processing. Cropping is essential to remove everything unnecessary from the frame. In processing, I tried to remove any shadows and get good, accurate color on the object.
When one photo reaches satisfactory quality, its settings are applied to everything in the folder, which can contain 200-400 photos.
Now the photos are ready to be converted into a 3D model. This is the most interesting step, and until all the photos are imported and the outlines of the 3D model appear, you cannot be sure whether the photo shoot was successful :)
For this step I tested a lot of applications, mostly AliceVision Meshroom and Reality Capture.
Meshroom is a free application, but it requires good preparation and setup for the result to be of good quality; after a few scans I decided to work with Reality Capture instead. It is true that it is a paid tool, but the price depends on the input and can range from a few to several tens of dollars. Reality Capture is also faster than Meshroom, though this again depends on the power of your computer (the stronger the better; a good processor and graphics card and enough RAM definitely help).
There are a lot of photogrammetry applications on the market like Autodesk ReCap, Agisoft Metashape, Pix4D, PhotoModeler Technologies, Regard3D, Trimble Inpho, WebODM, 3DF Zephyr and so on.
When the 3D model is created, it is exported in the formats you need for further processing.
I exported the model to .obj and then took it into Blender, where I further cleaned it and, where necessary, repaired it, closing “holes” that could not be closed in the applications used before. In some cases it was necessary to use Meshmixer, with which the model can be aligned and further “modeled”.
When I was satisfied with the 3D model, the final export goes from Blender to the .fbx format, as the most optimized (smallest possible) version of the 3D model. Why does that matter?
Instagram, just like Facebook and Snapchat, supports a maximum effect size of 4 MB, so it is important to stay within that limit; less is better.
Of course, if you are modeling in a 3D program, the models themselves are smaller and sometimes there will be no need for optimization, but since we are doing a scan, this step is necessary.
From Blender we have the model in .fbx format, and if it is still too big, or if we want to put several 3D models into one Instagram filter, everything together, including the textures, must be under 4 MB.
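A quick way to sanity-check that budget before importing anything into Spark AR is to sum the asset file sizes on disk; a minimal sketch (the file names in the usage comment are hypothetical examples):

```python
import os

MAX_EFFECT_BYTES = 4 * 1024 * 1024  # the 4 MB effect limit mentioned above

def asset_budget(paths):
    """Return (total_bytes, fits) for the files that must ship
    together in one effect (meshes plus textures)."""
    total = sum(os.path.getsize(p) for p in paths)
    return total, total <= MAX_EFFECT_BYTES

# Hypothetical usage:
# total, fits = asset_budget(["burger.fbx", "burger_diffuse.jpg"])
# print(total, fits)
```

Note that Spark AR also compresses assets on export, so this is only a rough upper-bound check, not the exact published size.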
For this reason, I went looking for applications and techniques for further “slimming down” the model. Since a separate article could be written about this alone, I will stop here. I found a great (paid) application called Mantis, and with its help I was able to further optimize the 3D models.
As for quality, it depends on several parameters, such as the number of triangles, so each model needs its own individual optimization.
Once you’re happy with the result, you can switch to the Augmented Reality effects app: Spark AR for Instagram, or Lens Studio for Snapchat. There is also a tool for making Augmented Reality for TikTok called Effector; however, it isn’t currently available in Croatia, where I’m from, so I will write about it in the future when it becomes available. (You can always use a VPN as a workaround.)
Textures and Shaders
The texture of the model, which can consist of one or more graphics, is usually in .png or .jpg format and can be further optimized. Its size in pixels can also be reduced, say from 4096 x 4096 px to 2048 x 2048 px, or to 1024 x 1024 px. Of course, a drastic reduction loses quality, so it is necessary to find the ideal ratio of quality to size. The main texture is the one displayed on the object, and alongside it you can have additional maps for so-called shaders: Diffuse, Bump or Specular.
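To see why those particular sizes help, note that each halving of the texture side cuts the pixel count, and roughly the stored data, by a factor of four; a small sketch:

```python
# Each halving of the texture side cuts the pixel count,
# and roughly the stored data, by a factor of four.
for side in (4096, 2048, 1024):
    pixels = side * side
    raw_mb = pixels * 4 / (1024 ** 2)  # 4 bytes per uncompressed RGBA pixel
    print(f"{side} x {side}: {pixels:,} px, ~{raw_mb:.0f} MB raw")
```

Compressed .png or .jpg files are of course much smaller than these raw numbers, but the four-to-one ratio between the steps holds roughly the same, which is why dropping one size step often gets a texture under the effect limit.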
After processing the texture in Photoshop, I definitely recommend additional image optimization with a tool called ShortPixel, which I normally use when optimizing photos for the web. ShortPixel can be used for free and gives excellent results without visible loss of quality.
I’ve been using Spark AR since it came out and have tested a variety of its features; from the very beginning, though, I’ve been fascinated by 3D models showing up in the real world.
After importing one or more optimized models into Spark AR, you need to place them in the scene, define their size, decide whether they will be animated or static, add lighting and more before posting the effect to Instagram or Facebook. A handy feature is the creation of filters or effects for Facebook Ads: you can create a Facebook ad that natively (after a click) displays your AR filter or effect, so you can promote it.
Once the effect is ready and posted to Spark AR Hub, you have to wait a while for it to be approved. It used to take several days, even weeks, but lately it has been literally a day, or sometimes an hour, for a filter to be approved and ready to use.
Each filter you post must comply with the rules, of which there are several, so read all about it here.
Within Spark AR Hub you can time-limit filters / effects, e.g. so that they are displayed only for Valentine’s Day (February 14th), after which they “disappear”. In addition, there is the possibility of updating a filter: you can upgrade or completely change an already published filter and upload a new version, and once it becomes active, it takes the place (and URL) of the old one.
This feature is handy if you need frequent (or infrequent) changes.
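The scheduling itself is configured in the Spark AR Hub interface, but the underlying idea is just a date window; a trivial sketch of the logic (the dates are example values):

```python
from datetime import date

def effect_visible(today: date, start: date, end: date) -> bool:
    """A time-limited effect is shown only inside its date window."""
    return start <= today <= end

# A Valentine's Day filter live only on February 14th:
valentines = date(2021, 2, 14)
print(effect_visible(date(2021, 2, 14), valentines, valentines))  # True
print(effect_visible(date(2021, 2, 15), valentines, valentines))  # False
```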
Own 3D model or purchased / borrowed
In my case, I imported my own 3D model after pre-processing, although you can import absolutely any 3D model that you made yourself or bought / downloaded from any website.
Before importing into Spark AR, it is important that the model is of satisfactory quality and not too big, i.e. less is better. One of the best services for 3D models is certainly Sketchfab, but you can also use models from services such as CGTrader, TurboSquid, etc.
Once Instagram approves your filter, you can try it out right away and share it with everyone to enjoy as much as you do.
Equipment I used
- Samsung S20
- Light Ring
- LED lamp
- Lazy Susan
- Adobe Lightroom
- Meshroom / Reality Capture
- Spark AR or Lens Studio
- Instagram / Facebook / Snapchat
Present your AR
Now that your Augmented Reality effect is ready, how do you distribute it and show it to potential clients or customers?
One of the more convenient ways is a QR code, which you can put on a website or print on a sticker, a poster or a business card.
You can easily create a QR code directly from the Google Chrome browser by clicking the icon in the URL bar to generate and download a QR code.
A plain URL such as https://www.instagram.com/ar/251221106448037/, which opens the filter on the platform that hosts it (in this case an Instagram Big Mac filter), is also sufficient.
By default, your filters are displayed on your (or your client’s) Instagram Business profile.
Click the tab with the smile icon to see all the Instagram filters you previously published via Spark AR Hub.
You can also share your filter in an Instagram Story: record a short video using the filter, and then attach the filter’s URL to that Story.
That is, in short, the process of creating an Augmented Reality filter for Instagram.
I would definitely like to hear your opinion on this topic, and whether you see an application for Augmented Reality in your own or your clients’ business.
PS. check out my AR blog https://vrarcro.com/ 😄