How to create a simple AR app (ARKit & SceneKit)
My AR demo lets you place cheeseburgers on your table using ARKit and SceneKit.
Step 1: Create Xcode project
First, create a new Augmented Reality App project in Xcode. Choose SceneKit as the Content Technology and Swift as the language.
Once the project is created, you will see three functions already generated for you: viewDidLoad, viewWillAppear, and viewWillDisappear.
viewDidLoad is called once the controller’s view has been loaded into memory. viewWillAppear notifies the controller that its view is about to be added to the view hierarchy, and viewWillDisappear notifies it that the view is about to be removed from the hierarchy.
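We won’t modify viewWillDisappear in this tutorial. For reference, the template implementation simply pauses the AR session so the camera and motion tracking stop when the view goes away:
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Pause the view's session
    sceneView.session.pause()
}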
Step 2: viewDidLoad function
This function runs once the view is loaded and is where we set up gesture detection. In this example I use a tap gesture to place the food in the environment. Later on we will add the didTap function it refers to.
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    // Call didTap(_:) whenever the user taps the screen
    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
    sceneView.addGestureRecognizer(tapGesture)
}
Step 3: viewWillAppear function
In this function we create the configuration for the session’s tracking. Which configuration you need depends on your app’s functionality: a face-filter app would use face tracking, but in our example we use world tracking to track the environment around the device.
With world tracking we also need to specify which surfaces ARKit should detect. This is done with the planeDetection property, which makes the session attempt to detect flat surfaces. We set it to .horizontal because we don’t want our food sticking to the wall.
More info on Face tracking: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_faces
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Track the device's position and orientation relative to the world
    let configuration = ARWorldTrackingConfiguration()
    // Detect horizontal surfaces such as tables and floors
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration)
}
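Plane detection runs in the background once the session starts. If you want feedback when a surface has been found, the delegate we set in viewDidLoad can report it. A minimal sketch, assuming the view controller conforms to ARSCNViewDelegate as in the Xcode template:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Called when ARKit adds a node for a new anchor, e.g. a detected plane
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    print("Detected a horizontal plane with extent \(planeAnchor.extent)")
}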
Step 4: didTap function
When the user taps, the gesture recognizer detects it and calls this function. Inside it we take the coordinates of the touch, hit-test them against the detected planes, and save the resulting world position as an SCNVector3, a representation of a three-component vector (x, y, z).
This position is then passed to the function addItemToPosition.
// @objc is required because the method is referenced via #selector in viewDidLoad
@objc func didTap(_ gesture: UITapGestureRecognizer) {
    guard let sceneViewTappedOn = gesture.view as? ARSCNView else { return }
    let touchCoordinates = gesture.location(in: sceneViewTappedOn)
    // Hit-test against planes ARKit has already detected
    let hitTest = sceneViewTappedOn.hitTest(touchCoordinates, types: .existingPlaneUsingExtent)
    guard let hitTestResult = hitTest.first else { return }
    // The last column of the world transform holds the translation (x, y, z)
    let position = SCNVector3(hitTestResult.worldTransform.columns.3.x,
                              hitTestResult.worldTransform.columns.3.y,
                              hitTestResult.worldTransform.columns.3.z)
    addItemToPosition(position)
}
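Note that ARSCNView’s hitTest(_:types:) is deprecated as of iOS 14; Apple recommends raycasting instead. Here is a sketch of the same tap handler using the raycast API (didTapUsingRaycast is a hypothetical name used for illustration):
@objc func didTapUsingRaycast(_ gesture: UITapGestureRecognizer) {
    guard let sceneView = gesture.view as? ARSCNView else { return }
    let touchCoordinates = gesture.location(in: sceneView)
    // Build a raycast query against detected horizontal plane geometry
    guard let query = sceneView.raycastQuery(from: touchCoordinates,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }
    let position = SCNVector3(result.worldTransform.columns.3.x,
                              result.worldTransform.columns.3.y,
                              result.worldTransform.columns.3.z)
    addItemToPosition(position)
}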
Step 5: addItemToPosition function
In this function we take the SCNVector3 position and add the model to the scene at that point. This is also the function in which we specify which model will be added. The node is added on the main queue, since scene-graph updates belong on the main thread.
func addItemToPosition(_ position: SCNVector3) {
    // Locate the model file in the app bundle
    guard let url = Bundle.main.url(forResource: "burger",
                                    withExtension: "usdz",
                                    subdirectory: "art.scnassets") else {
        return
    }
    // Bail out gracefully if the scene fails to load instead of crashing with try!
    guard let scene = try? SCNScene(url: url, options: [.checkConsistency: true]) else {
        return
    }
    DispatchQueue.main.async {
        if let node = scene.rootNode.childNode(withName: "burger", recursively: false) {
            node.position = position
            self.sceneView.scene.rootNode.addChildNode(node)
        }
    }
}
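Depending on how the USDZ file was authored, the model may appear far too large or too small in the scene. If your burger fills the whole room, scale the node before adding it. A sketch, assuming the same "burger" node as above:
if let node = scene.rootNode.childNode(withName: "burger", recursively: false) {
    node.position = position
    // Scale factor is model-specific; tune it until the burger looks right
    node.scale = SCNVector3(0.01, 0.01, 0.01)
    self.sceneView.scene.rootNode.addChildNode(node)
}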