PhotoCatch - Photogrammetry Application
Whilst exploring the different creative technologies available to us, we were introduced to photogrammetry. There are many ways of exploiting this technology through different applications and techniques. Whilst doing some research, I discovered PhotoCatch, an application made specially for M1 Macs and iOS devices.
This ingenious software has been optimised to take advantage of the M1 processor and its onboard Machine Learning (ML) hardware, using the SoC (System on a Chip) to make 3D photogrammetry models quicker, more detailed and more precise. The application lets the user generate a model at various quality levels, from wireframe all the way up to RAW, and takes advantage of higher-resolution cameras so that textures look almost lifelike. One advantage of the application is that models can be saved in various formats that can be read and further manipulated by other software such as Blender. The 3D models can then be viewed in AR using Apple's ARKit, placing the objects into 3D space with ease.
When first loading the software, I tested it out on a coffee mug. The first try was nothing short of abysmal. I quickly discovered that for the application to make something hyper-realistic, it needs tonnes of data to play with, which meant taking three to four times as many images as I had originally. When capturing an object for photogrammetry, you need to move around it slowly, taking as many images from as many angles as possible whilst maintaining consistent lighting. You can then enhance the texture detail by slowly orbiting the object again whilst gradually moving closer. Luckily, PhotoCatch doesn't require the user to upload the images in sequential order; otherwise, it would have been a very long and slow process. Once I had taken a lot more images, I ran them through the software again and was pleasantly surprised to see a 3D replica of the coffee mug that actually looked very decent. (Attached below)
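The capture routine above can be sketched as a simple two-pass orbit plan: a wide circle to cover the whole object, then a closer circle for texture detail. This is purely an illustrative Python sketch; the radii and shot counts are my own assumptions, not anything PhotoCatch prescribes:

```python
import math

def orbit_waypoints(radius, n_shots, height=0.0):
    """Camera positions evenly spaced on a circle around the object."""
    return [
        (radius * math.cos(2 * math.pi * i / n_shots),
         radius * math.sin(2 * math.pi * i / n_shots),
         height)
        for i in range(n_shots)
    ]

# First pass: a wide orbit covering the whole object.
# Second pass: a closer orbit to capture fine texture detail.
plan = orbit_waypoints(radius=1.5, n_shots=40) + orbit_waypoints(radius=0.6, n_shots=40)
print(len(plan))  # 80 shots in total
```

In practice you would also vary the camera height between passes so the top and underside of the object are covered, not just a single ring.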
On my second attempt, I took tonnes of images of my car and tried to put them through PhotoCatch. This is where I came across an issue that makes a lot of sense if you give it some thought: Glass. Does. Not. WORK. Whilst taking pictures of my car, it did not occur to me that the windows would not be recognised by the software, because what is visible through and reflected in them changes throughout each image. Once I ran the car through the software, it became apparent that glass simply cannot be read by the ML. Surprisingly, however, the model of the car came out pretty well; it was just ruined by the windows looking like something hell would produce.
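The glass problem makes sense in matching terms: photogrammetry relies on the same surface point looking the same from every camera angle, and transparent or reflective surfaces break that rule. A toy Python sketch of the idea (the appearance functions are purely illustrative, not how any real matcher works):

```python
def diffuse_appearance(point_colour, view_angle):
    # A matte surface looks the same from any angle.
    return point_colour

def glassy_appearance(point_colour, view_angle):
    # Glass shows whatever is behind or reflected in it,
    # which shifts with the viewing direction.
    return (point_colour + view_angle) % 256

views = [0, 45, 90]
matte = {diffuse_appearance(120, a) for a in views}
glass = {glassy_appearance(120, a) for a in views}
print(len(matte), len(glass))  # 1 consistent reading vs 3 conflicting ones
```

With one consistent reading per point, the software can triangulate a position; with three conflicting ones, it has nothing stable to lock onto, hence the hellish windows.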
Lens Studio - Snapchat AR Filters
Lens Studio, like many of the other applications I have used, is free software designed for the social media platform Snapchat. It uses a mix of Augmented Reality, Virtual Reality, 3D modelling and Machine Learning to create different types of “Lenses” that can be used by Snapchat users across the world.
I have had the privilege of using this software before on another course, which expanded my knowledge of how to use it. As a test, I put out a couple of filters on the platform and, to my surprise, they were extremely successful, with nearly 2 million users across them in total. This fuelled me to try to incorporate the software into a final project.
I created a test Lens by combining a few pieces of software. I started in Photoshop with a mockup of my end goal and worked from there. Creating all the assets I needed in Photoshop and then importing them into Lens Studio, I made a VHS glitch-style filter inspired by 80s movie posters. The application is very intuitive, which is good as it leaves little room for error, even for a beginner.
Many Augmented Reality features, like head, face and hand tracking, are built in, so all you need to do is turn the feature on and tell the application what to track, then away you go. I designed a simple head-tracking filter, with some graphics made in Photoshop, that looks great.
Next, I would like to test out the World Tracking and Virtual Reality features, as I believe these could be a great addition to my final project: building a tracked scene in Unreal, then teaching Lens Studio to recognise it and prompt the user to open a specially designed Snapchat filter, for an even better Mixed Reality experience.
Blender - 3D Graphics Software
Blender is a free and open-source piece of software that lets users take advantage of their GPU to create 3D graphics. It enables people to sculpt, and to render animations and still images, with incredible detail (if their computer is powerful enough).
Whilst testing out the software, I created a pretty basic render of some text and wasn't very pleased with the result. However, I discovered that Blender has several different render engines to choose from, each producing a different result depending on what you're trying to achieve. I went back to some more tutorials to create something a bit more detailed, but still something my computer could handle, and decided on a rotating stack of 3D cubes. On top of this, I added an HDRI to the environment and changed the material to be metallic and reflective. This was extremely demanding on the computer and took around four hours to render a single image, but I am extremely pleased with the results, which fuelled me to try some more designs. Next, I attempted to make some rings forming an odd shape, which I succeeded in; I actually like this render even more. Lastly, I discovered that you can import 3D objects into Blender from anywhere and manipulate them, so I changed the material and sizes of a set of flowers I found online, which turned out pretty well.
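That four-hour figure makes it clear why a full animation at this quality was off the table for me. A quick back-of-the-envelope estimate in Python (the clip length and frame rate below are my own example numbers, not anything I actually rendered):

```python
# Estimate total render time for a short animation at the speeds I saw.
hours_per_frame = 4          # roughly what the metallic-cube still took
seconds_of_animation = 10    # hypothetical clip length
fps = 25                     # a typical UK frame rate

total_frames = seconds_of_animation * fps
total_hours = total_frames * hours_per_frame
print(total_frames, total_hours)  # 250 frames, 1000 hours (over 40 days)
```

Numbers like these are why choosing the right render engine and settings for the job matters so much.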
Whilst reading up on Blender, I discovered that you can import your photogrammetry scans into the software and manipulate them further, something I tried with the cup I scanned in PhotoCatch. The results, after messing around with the materials and the sculpting features, were less than desirable. However, I think this is because my knowledge of the software is still limited; with practice, I believe there is definitely some potential there.
Lastly, the software lets you export your 3D models into Unreal Engine and turn them into whatever you want, whether for game or film purposes. This gives me some ideas as to how I could use PhotoCatch, Blender and Unreal all together to create environments that are realistic, yet artistic and also hyper-unreal! I'd like to continue using and learning Blender to help develop a prototype over the year.
Unreal Engine & Quixel MegaScans
Unreal Engine is a game engine that enables users to create high-fidelity, photorealistic graphics, producing renders alongside life-like physics and animation. It takes advantage of its C++ codebase to create cinematic graphics used in high-budget productions such as Disney's “The Mandalorian”. Because the Engine renders so quickly, film production times, pre and post, have been cut by 20 to 30% (Good 2020). During our lectures, we were introduced to the software, advised on how to get started with the basics and cinematics, and then given time to have a go at creating something with it.
Using assets from Epic Games' own catalogue, Quixel Bridge, it is effortless to create a highly detailed scene in an hour or so. Because the Engine's results are so good with so little complicated work, I would love to experiment further with the software to build something more detailed and possibly implement it within a final project. Whilst creating a simple example project, I was able to add objects to the scene with extreme ease. Furthermore, there is a very intuitive cinematic mode that let me create a sequence that looks and feels like it could have easily taken an animation studio months to build.
Whilst building this environment, I thought to myself: why not place some of the photogrammetry work I created into the scenery? This is incredibly easy with PhotoCatch, Blender and Unreal. After exporting the model of the coffee cup, I thought it would be quite comical and “pop art” to include a very random yet, in my opinion, perfectly placed oversized LJMU coffee cup in an environment I can only describe as an entrance to the Grand Canyon. Once I had created an environment I was pleased with, it was time to play around with the cinematics. The end result is quite choppy; however, with some practice it is definitely something I could perfect.
If I were to use this software in a further, more advanced project, I would definitely take advantage of its AR/VR capabilities. Unreal has hundreds of plug-ins that allow the software to do a plethora of things, one of which is turning the environment you create into something that can quite easily be experienced on a VR headset or an iOS device. Exploiting this feature is something I'd definitely want to look into, as creating a fully immersive experience using 3D graphics and photogrammetry seems so futuristic whilst blending modern-day technologies together.