
Week 2 Update

Greetings!

During this past week, I engineered the main functionalities of my app: fetching models, displaying models, and projecting models with Augmented Reality. Here's a rundown of all the details:
  • Fetch models with a URL or use the default model: the app displays an interface for picking models. The user can enter a URL themselves or use the default wigwam model (see the model-loading sketch after this list).
  • View and manipulate the model: 
    • Zoom, move, and rotate the model with gestures
    • Change the color of the model with a segmented control
    • Adjust the light intensity with a slider
    • Change the position of the light with a tap on the scene (the viewer-controls sketch after this list shows how these are wired up)
    • Settings Page
      • Change the scale factor used for the AR model
      • Pick from light types: omnidirectional, directional, probe, spot, or ambient
      • Pick from blend modes: add, alpha, multiply, subtract, screen, or replace
      • Animations! Choose between no animation and infinite rotation (see the settings sketch after this list)
  • Project models to the real world with Augmented Reality
    • Wait until the device has found a surface to place the model.
    • The app highlights the plane detected by ARKit.
    • Once the plane is shown, tapping it places the AR model on the plane.
    • Once the model is loaded, a pan gesture can be used to adjust the position of the light projected onto the model.
    • The AR model inherits all the settings from the previous SceneKit view (see the AR placement sketch after this list).
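
First, the model-loading sketch. It is a minimal version of the idea, assuming the model is fetched with URLSession, written to a temporary file, and parsed by SceneKit; ModelSource, loadScene, and the "wigwam.scn" filename are placeholder names rather than the exact identifiers in my project.

```swift
import SceneKit

// Minimal sketch: fetch a model from a user-supplied URL, or fall back to the
// bundled default. `ModelSource`, `loadScene`, and "wigwam.scn" are placeholder names.
enum ModelSource {
    case remote(URL)
    case bundledDefault
}

func loadScene(from source: ModelSource, completion: @escaping (SCNScene?) -> Void) {
    switch source {
    case .bundledDefault:
        // The default wigwam model shipped inside the app bundle (assumed filename).
        completion(SCNScene(named: "wigwam.scn"))

    case .remote(let url):
        // Download the model file, move it somewhere stable, then let SceneKit parse it.
        URLSession.shared.downloadTask(with: url) { tempURL, _, error in
            guard let tempURL = tempURL, error == nil else {
                DispatchQueue.main.async { completion(nil) }
                return
            }
            let localURL = FileManager.default.temporaryDirectory
                .appendingPathComponent(url.lastPathComponent)
            try? FileManager.default.removeItem(at: localURL)
            try? FileManager.default.moveItem(at: tempURL, to: localURL)
            let scene = try? SCNScene(url: localURL, options: nil)
            DispatchQueue.main.async { completion(scene) }
        }.resume()
    }
}
```

Downloading to a local file first keeps the SceneKit parsing path identical for remote and bundled models.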
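Next, the viewer-controls sketch, covering the color, intensity, and light-position controls. It assumes the view controller hands an SCNView, the model's node, and a light node to a small helper object; the class name, color palette, and light-height offset are illustrative choices, not the app's exact code.

```swift
import UIKit
import SceneKit

// Rough sketch of the viewer controls. `ViewerControls`, the color palette,
// and the light-height offset are placeholders, not the app's exact code.
final class ViewerControls: NSObject {
    let sceneView: SCNView
    let modelNode: SCNNode
    let lightNode: SCNNode

    init(sceneView: SCNView, modelNode: SCNNode, lightNode: SCNNode) {
        self.sceneView = sceneView
        self.modelNode = modelNode
        self.lightNode = lightNode
        super.init()
    }

    // Slider callback: SCNLight.intensity is measured in lumens (1000 is the default).
    @objc func intensityChanged(_ slider: UISlider) {
        lightNode.light?.intensity = CGFloat(slider.value)
    }

    // Segmented control callback: tint the model's first material (assumed palette).
    @objc func colorChanged(_ control: UISegmentedControl) {
        let palette: [UIColor] = [.white, .red, .green, .blue]
        modelNode.geometry?.firstMaterial?.diffuse.contents = palette[control.selectedSegmentIndex]
    }

    // Tap callback: hit-test the scene and move the light above the tapped point.
    @objc func sceneTapped(_ tap: UITapGestureRecognizer) {
        let location = tap.location(in: sceneView)
        guard let hit = sceneView.hitTest(location, options: nil).first else { return }
        var position = hit.worldCoordinates
        position.y += 2   // keep the light a little above the tapped surface (assumed offset)
        lightNode.position = position
    }
}
```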
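The settings sketch below shows how the settings page maps almost one-to-one onto SceneKit types; the Settings struct and the five-second rotation period are assumptions for illustration.

```swift
import SceneKit

// Rough sketch of applying the settings to a model node and its light.
// The `Settings` struct is a placeholder; the values map straight onto SceneKit types.
struct Settings {
    var scaleFactor: Float = 1.0
    var lightType: SCNLight.LightType = .omni   // .directional, .probe, .spot, .ambient
    var blendMode: SCNBlendMode = .alpha        // .add, .multiply, .subtract, .screen, .replace
    var rotateForever: Bool = false
}

func apply(_ settings: Settings, to modelNode: SCNNode, lightNode: SCNNode) {
    // Scale factor from the settings page.
    modelNode.scale = SCNVector3(x: settings.scaleFactor,
                                 y: settings.scaleFactor,
                                 z: settings.scaleFactor)

    // Light type and blend mode (one material kept here for brevity;
    // a multi-node model would need a walk over its child nodes).
    lightNode.light?.type = settings.lightType
    modelNode.geometry?.firstMaterial?.blendMode = settings.blendMode

    // Animation: either nothing or an infinite spin around the vertical axis
    // (one full turn every 5 seconds is an assumed period).
    modelNode.removeAllActions()
    if settings.rotateForever {
        let spin = SCNAction.rotateBy(x: 0, y: CGFloat.pi * 2, z: 0, duration: 5)
        modelNode.runAction(SCNAction.repeatForever(spin))
    }
}
```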
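Finally, the AR placement sketch: run horizontal plane detection, overlay each detected plane with a translucent highlight, and drop the model where the user taps. It uses the ARKit hit-testing API that shipped with iOS 11; the class and property names are placeholders.

```swift
import ARKit
import SceneKit

// Rough sketch of plane detection, highlighting, and tap-to-place.
// `ARPlacementController`, `sceneView`, and `modelNode` are placeholder names.
final class ARPlacementController: NSObject, ARSCNViewDelegate {
    let sceneView: ARSCNView
    let modelNode: SCNNode   // the model node, already scaled by the settings' scale factor

    init(sceneView: ARSCNView, modelNode: SCNNode) {
        self.sceneView = sceneView
        self.modelNode = modelNode
        super.init()

        sceneView.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)

        let tap = UITapGestureRecognizer(target: self, action: #selector(placeModel(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    // Highlight each plane ARKit detects with a translucent overlay.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.3)
        let planeNode = SCNNode(geometry: plane)
        planeNode.position = SCNVector3(x: planeAnchor.center.x, y: 0, z: planeAnchor.center.z)
        planeNode.eulerAngles.x = -.pi / 2   // SCNPlane is vertical by default; lay it flat
        node.addChildNode(planeNode)
    }

    // Tapping a highlighted plane places the model at the tapped point.
    @objc func placeModel(_ tap: UITapGestureRecognizer) {
        let location = tap.location(in: sceneView)
        guard let hit = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first else {
            return
        }
        let t = hit.worldTransform.columns.3
        modelNode.position = SCNVector3(x: t.x, y: t.y, z: t.z)
        sceneView.scene.rootNode.addChildNode(modelNode)
    }
}
```

Hit-testing against .existingPlaneUsingExtent restricts placement to the highlighted plane area, which matches the behavior described in the list above.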
Here are a few screenshots of this awesome app:

AR Plane Detection

Model Projected with AR

Model Projected with AR

Regular Model Viewer

Model Picker Page

Settings Page
